
Chapter 13

Concurrency

ISBN 0-321-49362-1
Chapter 13 Topics
• Introduction
• Introduction to Subprogram-Level Concurrency
• Semaphores
• Monitors
• Message Passing
• Ada support for Concurrency
• Java Threads
• C# Threads
• Concurrency in Functional Languages
• Statement-Level Concurrency

Copyright © 2018 Pearson. All rights reserved. 1-2


Introduction

• Concurrency can occur at four levels:


– Machine instruction level
– High-level language statement level
– Unit level
– Program level
• Because there are no language issues in
instruction- and program-level
concurrency, they are not addressed here



Multiprocessor Architectures
• Late 1950s - one general-purpose processor and
one or more special-purpose processors for input
and output operations
• Early 1960s - multiple complete processors, used
for program-level concurrency
• Mid-1960s - multiple partial processors, used for
instruction-level concurrency
• Single-Instruction Multiple-Data (SIMD) machines
• Multiple-Instruction Multiple-Data (MIMD)
machines
• A primary focus of this chapter is shared memory
MIMD machines (multiprocessors)
Categories of Concurrency
• Categories of Concurrency:
– Physical concurrency - Multiple independent
processors (multiple threads of control)
– Logical concurrency - The appearance of
physical concurrency is presented by
time-sharing one processor (software can be
designed as if there were multiple threads of
control)
• Coroutines (quasi-concurrency) have a
single thread of control
• A thread of control in a program is the
sequence of program points reached as
control flows through the program
Motivations for the Use of Concurrency

• Multiprocessor computers capable of physical
concurrency are now widely used
• Even if a machine has just one processor, a
program written to use concurrent execution can
be faster than the same program written for
nonconcurrent execution
• Involves a different way of designing software that
can be very useful—many real-world situations
involve concurrency
• Many program applications are now spread over
multiple machines, either locally or over a network



Introduction to Subprogram-Level
Concurrency
• A task or process or thread is a program
unit that can be in concurrent execution
with other program units
• Tasks differ from ordinary subprograms in
that:
– A task may be implicitly started
– When a program unit starts the execution of a
task, it is not necessarily suspended
– When a task’s execution is completed, control
may not return to the caller
• Tasks usually work together



Two General Categories of Tasks

• Heavyweight tasks execute in their own
address space
• Lightweight tasks all run in the same
address space – more efficient
• A task is disjoint if it does not
communicate with or affect the execution
of any other task in the program in any way



Task Synchronization

• A mechanism that controls the order in
which tasks execute
• Two kinds of synchronization
– Cooperation synchronization
– Competition synchronization
• Task communication is necessary for
synchronization, provided by:
- Shared nonlocal variables
- Parameters
- Message passing



Kinds of synchronization
• Cooperation: Task A must wait for task B to
complete some specific activity before task
A can continue its execution, e.g., the
producer-consumer problem
• Competition: Two or more tasks must use
some resource that cannot be
simultaneously used, e.g., a shared counter
– Competition is usually provided by mutually
exclusive access (approaches are discussed
later)



Need for Competition Synchronization

Task A: TOTAL = TOTAL + 1


Task B: TOTAL = 2 * TOTAL

- Depending on the order in which the two tasks’ reads and writes interleave, there can be four different results
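The four outcomes can be verified by treating each statement as two atomic steps (a read of TOTAL, then a write) and enumerating every interleaving. A minimal sketch in Python; the function name and the initial value of 3 are illustrative, not from the text:

```python
from itertools import combinations

def possible_totals(initial):
    """Enumerate every interleaving of the two tasks' read and write
    steps and collect the resulting values of TOTAL."""
    results = set()
    # Each task is two atomic steps: read its operand, then write its result.
    # Choose which 2 of the 4 time slots belong to task A; the rest are B's.
    for a_slots in combinations(range(4), 2):
        total = initial
        a_val = b_val = None
        for slot in range(4):
            if slot in a_slots:
                if a_val is None:
                    a_val = total           # Task A reads TOTAL
                else:
                    total = a_val + 1       # Task A writes TOTAL + 1
            else:
                if b_val is None:
                    b_val = total           # Task B reads TOTAL
                else:
                    total = 2 * b_val       # Task B writes 2 * TOTAL
        results.add(total)
    return results

print(sorted(possible_totals(3)))  # → [4, 6, 7, 8]
```

Starting from TOTAL = 3, the six interleavings yield exactly four distinct results, as the slide states.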


Scheduler

• Providing synchronization requires a
mechanism for delaying task execution
• Task execution control is maintained by a
program called the scheduler, which maps
task execution onto available processors



Task Execution States

• New - created but not yet started


• Ready - ready to run but not currently
running (no available processor)
• Running
• Blocked - has been running, but
cannot now continue (usually waiting
for some event to occur)
• Dead - no longer active in any sense



Liveness and Deadlock

• Liveness is a characteristic that a program
unit may or may not have
- In sequential code, it means the unit will
eventually complete its execution
• In a concurrent environment, a task can
easily lose its liveness
• If all tasks in a concurrent environment lose
their liveness, it is called deadlock



Design Issues for Concurrency
• Competition and cooperation
synchronization*
• Controlling task scheduling
• How can an application influence task
scheduling
• How and when tasks start and end
execution
• How and when are tasks created
* The most important issue



Methods of Providing Synchronization

• Semaphores
• Monitors
• Message Passing



Semaphores

• Dijkstra - 1965
• A semaphore is a data structure consisting of a
counter and a queue for storing task descriptors
– A task descriptor is a data structure that stores all of the
relevant information about the execution state of the task
• Semaphores can be used to implement guards on
the code that accesses shared data structures
• Semaphores have only two operations, wait and
release (originally called P and V by Dijkstra)
• Semaphores can be used to provide both
competition and cooperation synchronization



Cooperation Synchronization with
Semaphores

• Example: A shared buffer


• The buffer is implemented as an ADT with
the operations DEPOSIT and FETCH as the
only ways to access the buffer
• Use two semaphores for cooperation:
emptyspots and fullspots
• The semaphore counters are used to store
the numbers of empty spots and full spots
in the buffer



Cooperation Synchronization with
Semaphores (continued)
• DEPOSIT must first check emptyspots to
see if there is room in the buffer
• If there is room, the counter of emptyspots
is decremented and the value is inserted
• If there is no room, the caller is stored in
the queue of emptyspots
• When DEPOSIT is finished, it must
increment the counter of fullspots



Cooperation Synchronization with
Semaphores (continued)
• FETCH must first check fullspots to see if
there is a value
– If there is a full spot, the counter of fullspots
is decremented and the value is removed
– If there are no values in the buffer, the caller
must be placed in the queue of fullspots
– When FETCH is finished, it increments the
counter of emptyspots
• The operations of FETCH and DEPOSIT on
the semaphores are accomplished through
two semaphore operations named wait and
release
Semaphores: Wait and Release Operations

wait(aSemaphore)
  if aSemaphore’s counter > 0 then
    decrement aSemaphore’s counter
  else
    put the caller in aSemaphore’s queue
    attempt to transfer control to a ready task
    -- if the task ready queue is empty,
    -- deadlock occurs
  end

release(aSemaphore)
  if aSemaphore’s queue is empty then
    increment aSemaphore’s counter
  else
    put the calling task in the task ready queue
    transfer control to a task from aSemaphore’s queue
  end
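The wait/release pseudocode can be sketched with Python's threading primitives. This is an illustration under assumptions, not the text's implementation: a Condition variable replaces the explicit descriptor queue (so FIFO wakeup order is not guaranteed), and the deadlock check on the ready queue is omitted.

```python
import threading

class SimpleSemaphore:
    """Minimal counting-semaphore sketch mirroring the wait/release
    pseudocode. Blocked callers sleep on a condition variable instead
    of an explicit task-descriptor queue (an assumption made here),
    so FIFO wakeup is not guaranteed."""
    def __init__(self, count=0):
        self._count = count
        self._cond = threading.Condition()

    def wait(self):
        with self._cond:
            while self._count == 0:   # no units left: join the waiters
                self._cond.wait()
            self._count -= 1          # counter > 0: take one unit

    def release(self):
        with self._cond:
            self._count += 1
            self._cond.notify()       # transfer control to one waiting task
```

A semaphore created with `SimpleSemaphore(0)` blocks its first waiter until someone calls `release`, which is exactly the cooperation pattern the buffer example below relies on.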



Producer and Consumer Tasks
semaphore fullspots, emptyspots;
fullspots.count = 0;
emptyspots.count = BUFLEN;

task producer;
  loop
    -- produce VALUE --
    wait(emptyspots);    {wait for space}
    DEPOSIT(VALUE);
    release(fullspots);  {increase filled}
  end loop;
end producer;

task consumer;
  loop
    wait(fullspots);     {wait till not empty}
    FETCH(VALUE);
    release(emptyspots); {increase empty}
    -- consume VALUE --
  end loop;
end consumer;
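Assuming a single producer and a single consumer (so no competition synchronization is needed yet), the pseudocode maps directly onto Python's `threading.Semaphore`; `BUFLEN` mirrors the slide, while the driver at the bottom is illustrative:

```python
import threading
from collections import deque

BUFLEN = 4
buffer = deque()                          # the shared buffer
emptyspots = threading.Semaphore(BUFLEN)  # counts empty spots
fullspots = threading.Semaphore(0)        # counts full spots

def producer(items):
    for value in items:
        emptyspots.acquire()              # wait(emptyspots): wait for space
        buffer.append(value)              # DEPOSIT(VALUE)
        fullspots.release()               # release(fullspots): increase filled

def consumer(n, out):
    for _ in range(n):
        fullspots.acquire()               # wait(fullspots): wait till not empty
        out.append(buffer.popleft())      # FETCH(VALUE)
        emptyspots.release()              # release(emptyspots): increase empty

out = []
p = threading.Thread(target=producer, args=(range(10),))
c = threading.Thread(target=consumer, args=(10, out))
p.start(); c.start()
p.join(); c.join()
print(out)                                # all ten values arrive, in order
```

The buffer never holds more than BUFLEN items: the producer blocks on `emptyspots` whenever the consumer falls behind.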



Competition Synchronization with
Semaphores
• A third semaphore, named access, is used
to control access (competition
synchronization)
– The counter of access will only have the values
0 and 1
– Such a semaphore is called a binary semaphore
• Note that wait and release must be atomic!



Producer Code for Semaphores
semaphore access, fullspots, emptyspots;
access.count = 0;
fullstops.count = 0;
emptyspots.count = BUFLEN;
task producer;
loop
-- produce VALUE –-
wait(emptyspots); {wait for space}
wait(access); {wait for access)
DEPOSIT(VALUE);
release(access); {relinquish access}
release(fullspots); {increase filled}
end loop;
end producer;



Consumer Code for Semaphores
task consumer;
loop
wait(fullspots);{wait till not empty}
wait(access); {wait for access}
FETCH(VALUE);
release(access); {relinquish access}
release(emptyspots); {increase empty}
-- consume VALUE –-
end loop;
end consumer;
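With access initialized to 1 (a binary semaphore guarding the buffer must start at 1, or the first wait(access) would block forever), the full three-semaphore scheme can be sketched in Python. Two producers share the buffer here, which is what makes access necessary; the thread counts and value ranges are illustrative:

```python
import threading
from collections import deque

BUFLEN = 4
buffer = deque()
access = threading.Semaphore(1)           # binary semaphore: counter is 1 or 0
emptyspots = threading.Semaphore(BUFLEN)  # cooperation: counts empty spots
fullspots = threading.Semaphore(0)        # cooperation: counts full spots

def producer(items):
    for value in items:
        emptyspots.acquire()              # wait for space
        access.acquire()                  # wait for access
        buffer.append(value)              # DEPOSIT(VALUE)
        access.release()                  # relinquish access
        fullspots.release()               # increase filled

def consumer(n, out):
    for _ in range(n):
        fullspots.acquire()               # wait till not empty
        access.acquire()                  # wait for access
        out.append(buffer.popleft())      # FETCH(VALUE)
        access.release()                  # relinquish access
        emptyspots.release()              # increase empty

out = []
threads = [threading.Thread(target=producer, args=(range(0, 50),)),
           threading.Thread(target=producer, args=(range(50, 100),)),
           threading.Thread(target=consumer, args=(100, out))]
for t in threads: t.start()
for t in threads: t.join()
```

The two producers interleave nondeterministically, so `out` is not sorted, but every value arrives exactly once because the buffer is only ever touched while `access` is held.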



Evaluation of Semaphores

• Misuse of semaphores can cause failures in
cooperation synchronization, e.g., the
buffer will overflow if the wait of
emptyspots is left out
• Misuse of semaphores can cause failures in
competition synchronization, e.g., the
program will deadlock if the release of
access is left out



Monitors

• Ada, Java, C#
• The idea: encapsulate the shared data and
its operations to restrict access
• A monitor is an abstract data type for
shared data



Competition Synchronization

• Shared data is resident in the monitor
(rather than in the client units)
• All access is resident in the monitor
– The monitor implementation guarantees
synchronized access by allowing only one
access at a time
– Calls to monitor procedures are implicitly
queued if the monitor is busy at the time of the
call
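A monitor can be approximated in Python by an ADT whose lock and data are both private, so every operation is a guarded critical section and concurrent callers queue implicitly on the lock. The class and method names below are illustrative, not from the text:

```python
import threading

class TotalMonitor:
    """Monitor-style ADT sketch: the shared value lives inside the
    class, and every entry point acquires the hidden lock, so only
    one access proceeds at a time; other callers queue on the lock."""
    def __init__(self, total=0):
        self._lock = threading.Lock()
        self._total = total

    def increment(self):                   # Task A's TOTAL = TOTAL + 1
        with self._lock:
            self._total += 1

    def double(self):                      # Task B's TOTAL = 2 * TOTAL
        with self._lock:
            self._total *= 2

    def value(self):
        with self._lock:
            return self._total

m = TotalMonitor()
workers = [threading.Thread(target=lambda: [m.increment() for _ in range(1000)])
           for _ in range(4)]
for w in workers: w.start()
for w in workers: w.join()
print(m.value())                           # → 4000
```

Because clients can only reach the counter through the monitor's procedures, the race from the earlier TOTAL example cannot occur: each increment is atomic with respect to the others.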



Cooperation Synchronization

• Cooperation between processes is still a
programming task
– Programmer must guarantee that a shared
buffer does not experience underflow or
overflow



Evaluation of Monitors

• A better way to provide competition
synchronization than semaphores
• Semaphores can be used to implement
monitors
• Monitors can be used to implement
semaphores
• Support for cooperation synchronization is
very similar to that with semaphores, so it has
the same problems



Message Passing
• Message passing is a general model for
concurrency
– It can model both semaphores and monitors
– It is not just for competition synchronization
• Central idea: task communication is like
seeing a doctor--most of the time she
waits for you or you wait for her, but when
you are both ready, you get together, or
rendezvous



Message Passing Rendezvous
• To support concurrent tasks with message
passing, a language needs:

- A mechanism to allow a task to indicate when it
is willing to accept messages
- A way to remember who is waiting to have its
message accepted and some “fair” way of choosing
the next message

• When a sender task’s message is accepted by a
receiver task, the actual message transmission is
called a rendezvous
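The rendezvous idea can be sketched in Python: a sender blocks until a receiver actually accepts its message, and waiting senders are remembered in FIFO order (a simple "fair" choice of the next message). The class and method names are illustrative assumptions, not Ada's actual accept syntax:

```python
import queue
import threading

class RendezvousPort:
    """Synchronous message-passing sketch: send() does not return
    until accept() has taken the message, so transmission happens
    only when both tasks are ready (the rendezvous)."""
    def __init__(self):
        self._pending = queue.Queue()   # remembers waiting senders, FIFO

    def send(self, message):
        accepted = threading.Event()
        self._pending.put((message, accepted))
        accepted.wait()                 # block until a receiver accepts

    def accept(self):
        message, accepted = self._pending.get()  # blocks if no sender waits
        accepted.set()                  # release the sender: rendezvous done
        return message

port = RendezvousPort()
sender = threading.Thread(target=port.send, args=("hello",))
sender.start()
received = port.accept()                # both sides are now ready
sender.join()
print(received)                         # → hello
```

Like the doctor analogy, whichever side arrives first simply waits; the Queue remembers pending senders, and the Event is what lets the receiver tell a specific sender its message has been accepted.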

