University of Balamand
Faculty of Sciences
CSIS 221
Operating Systems
Emilio Chahine
A1411058
Report 8
Concurrency: Mutual Exclusion and Synchronization
Outline:
1. Distributed Processing
a. Definition
2. Concurrency
a. Definition
b. Key Terms Related to Concurrency (Definition of each)
Atomic Operation
Critical Section
Deadlock
Livelock
Mutual Exclusion
1. Definition
2. Requirements for Mutual Exclusion
Race Condition
Starvation
c. Principles of Concurrency (Brief description)
3. Process Interaction
a. Processes unaware of each other
b. Processes indirectly aware of each other
c. Processes directly aware of each other
4. Semaphores
a. Common Concurrency Mechanisms (Definition of each)
Semaphore
Binary Semaphore
Mutex
Condition Variable
Monitor
Event Flag
Mailboxes/Messages
Spinlocks
b. Performed Operations on Semaphores
Semaphore Initialization
semSignal
semWait
c. Mutual Exclusion Using Semaphores (Description)
d. Producer/Consumer Problem
e. Strong vs. Weak Semaphores
5. Monitors
a. Definition
b. Monitor Synchronization
Condition Variables
1. Description
2. Cwait(c)
3. Csignal(c)
6. Producer/Consumer Problem
a. Solution using Semaphores
b. Solution using Monitors
7. Inter-Process Communication
a. Message Passing (Description)
Primitive Functions
1. Send
2. Receive
Message Communication Combinations
1. Blocking Send, Blocking Receive
2. Nonblocking Send, Blocking Receive
3. Nonblocking Send, Nonblocking Receive
b. Process Addressing Modes
Description
Direct Addressing
Indirect Addressing
8. References
1. Distributed Processing
a. Definition: distributed processing is a computer-networking method in which multiple computers across different locations share computer-processing capability. This is in contrast to a single, centralized server managing and providing processing capability to all connected systems.
2. Concurrency
a. Definition: concurrent processing is a computing model in which multiple processors execute instructions simultaneously for better performance. Concurrent means happening at the same time as something else. Concurrent processing is sometimes said to be synonymous with parallel processing.
b. Key Terms Related to Concurrency (Definition of each)
Atomic Operation: an operation that executes as a single, indivisible unit: it runs to completion without interruption, and no other process can observe it partway through. Atomic operations are used in many modern operating systems and parallel processing systems.
Critical Section: a code segment that accesses shared variables or resources and that more than one process may try to execute at the same time. Access to a critical section must be synchronized to maintain the consistency of the shared data.
Deadlock: a specific condition in which two or more processes are each waiting for another to release a resource, or more than two processes are waiting for resources in a circular chain.
Livelock: similar to a deadlock, except that the states of the processes involved constantly change with regard to one another, with none of them progressing.
Mutual Exclusion:
1. Definition:
A way of making sure that if one process is using shared modifiable data, the other processes are excluded from doing the same thing at the same time. Each process that accesses the shared data (variables) excludes all others from doing so simultaneously; this property is called mutual exclusion.
2. Requirements for Mutual Exclusion: a correct mutual exclusion mechanism must avoid the following problems:
Deadlock = endless waiting due to circular wait relationships.
Starvation = unbounded waiting due to the order-of-service policy.
Unfairness = requests are not served in the order they are made.
Fault intolerance = the algorithm breaks if processes die or messages are lost or garbled.
Race Condition: a situation in concurrent programming where two concurrent threads or processes compete for a resource, and the resulting final state depends on which one gets the resource first (a minimal sketch follows this list).
Starvation: a process does not get the resources it needs for a long time because the resources are being allocated to other processes. It generally occurs in a priority-based scheduling system.
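To make the race condition concrete, here is a minimal sketch in C (not from the original report; the thread count and iteration count are illustrative): two threads increment a shared counter with no synchronization, so updates are lost and the final value usually falls short of the expected 2,000,000.

#include <pthread.h>
#include <stdio.h>

static long counter = 0;                /* shared, unprotected variable */

static void *worker(void *arg) {
    for (int i = 0; i < 1000000; i++)
        counter++;                      /* read-modify-write: not atomic */
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %ld\n", counter); /* expected 2000000, usually less */
    return 0;
}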
c. Principles of Concurrency (Brief description):
Interleaving (on a uniprocessor) and overlapping (on multiple processors) can both be viewed as examples of concurrent processing, and both present the same problems.
On a uniprocessor, the relative speed of execution of processes cannot be predicted; it depends on the activities of other processes, the way the OS handles interrupts, and the scheduling policies of the OS.
3. Process Interaction
a. Processes unaware of each other
Relationship: competition.
Influence that one process has on the other: the results of one process are independent of the actions of the others; the timing of a process may be affected.
Potential control problems: mutual exclusion, deadlock, starvation.
b. Processes indirectly aware of each other
Relationship: cooperation by sharing.
Influence that one process has on the other: the results of one process may depend on information obtained from others; the timing of a process may be affected.
Potential control problems: mutual exclusion, deadlock, starvation, data coherence.
c. Processes directly aware of each other
Relationship: cooperation by communication.
Influence that one process has on the other: the results of one process may depend on information obtained from others; the timing of a process may be affected.
Potential control problems: deadlock, starvation.
4. Semaphores
a. Common Concurrency Mechanisms (Definition of each)
Semaphore: a variable or abstract data type used to control access to a common resource by multiple processes in a concurrent system such as a multitasking operating system.
Binary Semaphore: a semaphore restricted to the values 0 and 1, typically used to implement locks.
Mutex: is a program object that allows multiple program threads to
share the same resource, such as file access, but not
simultaneously.
Condition Variable: synchronization primitives that enable threads to wait until a particular condition occurs. They are user-mode objects that cannot be shared across processes.
Monitor: a synchronization construct that allows threads to have both mutual exclusion and the ability to wait (block) for a certain condition to become true.
Event Flag: used when a task needs to synchronize with the occurrence of multiple events. The task can be synchronized when any one of the events has occurred (disjunctive synchronization) or when all of the events have occurred (conjunctive synchronization).
Mailboxes/Messages: a mailbox is a data buffer that can store a fixed number of messages of a fixed size. Tasks store messages in a mailbox and other tasks retrieve them from it, providing an orderly mechanism for inter-task communication.
Spinlocks: a lock which causes a thread trying to acquire it to simply wait in a loop ("spin") while repeatedly checking whether the lock is available (a minimal sketch follows this list).
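As an illustration of the spinlock idea, a minimal sketch in C using the standard C11 atomic_flag (the spinlock_t type and the spin_lock/spin_unlock names are illustrative, not from the report):

#include <stdatomic.h>

typedef struct { atomic_flag locked; } spinlock_t;

static void spin_lock(spinlock_t *l) {
    /* Busy-wait until the flag was previously clear, i.e. the lock was free. */
    while (atomic_flag_test_and_set_explicit(&l->locked, memory_order_acquire))
        ;                               /* spin */
}

static void spin_unlock(spinlock_t *l) {
    atomic_flag_clear_explicit(&l->locked, memory_order_release);
}

/* Usage: spinlock_t lock = { ATOMIC_FLAG_INIT };
   spin_lock(&lock);  ...critical section...  spin_unlock(&lock); */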
b. Performed Operations on Semaphores
Semaphore Initialization: initializes the semaphore variable pointed to by sem to a given value. If the value of pshared is zero, the semaphore cannot be shared between processes; if pshared is nonzero, the semaphore can be shared between processes.
semSignal: if there are threads blocked in the semWait queue for the semaphore, the first blocked thread is released to execute; otherwise the semaphore value is incremented by 1.
semWait: locks the semaphore referenced by sem by performing a semaphore lock operation on it. If the semaphore value is currently zero, the calling thread does not return from the call to semWait until it either locks the semaphore or the call is interrupted by a signal.
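As a concrete illustration, here is a minimal sketch of the three operations using their POSIX counterparts sem_init, sem_wait, and sem_post (the mapping to semSignal/semWait is assumed for illustration; the report does not prescribe a specific API):

#include <semaphore.h>
#include <stdio.h>

int main(void) {
    sem_t sem;

    /* Semaphore initialization: initial value 1, pshared = 0
       (shared only among the threads of this process). */
    sem_init(&sem, 0, 1);

    sem_wait(&sem);        /* semWait: decrements, blocks if the value is 0 */
    printf("inside the protected region\n");
    sem_post(&sem);        /* semSignal: increments, wakes a waiter if any  */

    sem_destroy(&sem);
    return 0;
}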
c. Mutual Exclusion Using Semaphores (Description): a semaphore initialized to 1 (typically a binary semaphore) is used to guard access to a resource. Every process performs semWait before entering the critical section; the first one proceeds while the others block. When the process leaves the critical section it performs semSignal, and one of the waiting processes gets to go in turn (a minimal sketch follows).
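A minimal sketch of this pattern in C, with a POSIX semaphore initialized to 1 (the thread count and variable names are illustrative assumptions):

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

static sem_t mutex;                /* semaphore initialized to 1          */
static int shared = 0;             /* shared data guarded by the mutex    */

static void *task(void *arg) {
    sem_wait(&mutex);              /* semWait: enter the critical section   */
    shared++;                      /* only one thread runs this at a time   */
    sem_post(&mutex);              /* semSignal: leave the critical section */
    return NULL;
}

int main(void) {
    pthread_t t[4];
    sem_init(&mutex, 0, 1);
    for (int i = 0; i < 4; i++) pthread_create(&t[i], NULL, task, NULL);
    for (int i = 0; i < 4; i++) pthread_join(t[i], NULL);
    printf("shared = %d\n", shared);   /* always 4 */
    sem_destroy(&mutex);
    return 0;
}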
d. Producer/Consumer Problem: a classic example of a multi-process synchronization problem. The problem describes two processes, the producer and the consumer, who share a common, fixed-size buffer used as a queue. The producer's job is to generate data, put it into the buffer, and start again; the consumer's job is to remove data from the buffer, one piece at a time. Solutions are given in section 6.
e. Strong vs. Weak Semaphores:
Strong: the process that has been blocked the longest is released from the
queue first (FIFO)
Weak: the order in which processes are removed from the queue is not
specified
5. Monitors
a. Definition: a monitor is a synchronization construct that allows threads to have both mutual exclusion and the ability to wait (block) for a certain condition to become true. A monitor consists of a mutex (lock) object and condition variables.
b. Monitor Synchronization
Condition Variables
1. Description: a container of threads that are waiting for a
certain condition.
2. Cwait(c): Suspend execution of the calling process on
condition c. The monitor is now available for use by
another process.
3. Csignal(c): Resume execution of some process blocked after a Cwait on the same condition. If there are several such processes, choose one of them; if there is no such process, do nothing (a pthreads analogue is sketched below).
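As a rough analogue (an illustrative assumption, not the report's own notation), cwait and csignal correspond to pthread_cond_wait and pthread_cond_signal in POSIX threads, always called while holding the monitor's lock:

#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

static pthread_mutex_t mon = PTHREAD_MUTEX_INITIALIZER; /* the monitor's lock */
static pthread_cond_t  c   = PTHREAD_COND_INITIALIZER;  /* condition variable */
static bool ready = false;

static void *waiter(void *arg) {
    pthread_mutex_lock(&mon);
    while (!ready)                       /* cwait(c): release the lock and block */
        pthread_cond_wait(&c, &mon);
    printf("condition holds, waiter proceeds\n");
    pthread_mutex_unlock(&mon);
    return NULL;
}

static void *signaler(void *arg) {
    pthread_mutex_lock(&mon);
    ready = true;
    pthread_cond_signal(&c);             /* csignal(c): wake one waiter, if any */
    pthread_mutex_unlock(&mon);
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, waiter, NULL);
    pthread_create(&t2, NULL, signaler, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return 0;
}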
6. Producer/Consumer Problem
a. Solution using Semaphores (a bounded-buffer sketch follows the points below):
o Imperative that the semWait and semSignal operations be
implemented as atomic primitives
o Can be implemented in hardware or firmware
o Software schemes such as Dekker’s or Peterson’s algorithms can be
used
o Use one of the hardware-supported schemes for mutual exclusion
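A minimal bounded-buffer sketch along these lines, using POSIX counting semaphores (the buffer size, item count, and names are illustrative assumptions):

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

#define N 8                            /* buffer capacity (assumed) */

static int buffer[N];
static int in = 0, out = 0;

static sem_t empty;                    /* counts free slots, starts at N   */
static sem_t full;                     /* counts filled slots, starts at 0 */
static sem_t mutex;                    /* protects the buffer, starts at 1 */

static void *producer(void *arg) {
    for (int item = 0; item < 32; item++) {
        sem_wait(&empty);              /* wait for a free slot   */
        sem_wait(&mutex);
        buffer[in] = item;             /* append the item        */
        in = (in + 1) % N;
        sem_post(&mutex);
        sem_post(&full);               /* signal a filled slot   */
    }
    return NULL;
}

static void *consumer(void *arg) {
    for (int i = 0; i < 32; i++) {
        sem_wait(&full);               /* wait for a filled slot */
        sem_wait(&mutex);
        int item = buffer[out];        /* take the item          */
        out = (out + 1) % N;
        sem_post(&mutex);
        sem_post(&empty);              /* signal a free slot     */
        printf("consumed %d\n", item);
    }
    return NULL;
}

int main(void) {
    pthread_t p, c;
    sem_init(&empty, 0, N);
    sem_init(&full, 0, 0);
    sem_init(&mutex, 0, 1);
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return 0;
}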
b. Solution using Monitors (a mutex-and-condition-variable sketch follows the points below):
o Programming language construct that provides equivalent functionality
to that of semaphores and is easier to control
o Implemented in a number of programming languages
including Concurrent Pascal, Pascal-Plus, Modula-2, Modula-3,
and Java
o Has also been implemented as a program library
o Software module consisting of one or more procedures, an
initialization sequence, and local data
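A minimal sketch of such a monitor-style module in C, using one mutex as the monitor lock and two condition variables (the names and buffer size are illustrative assumptions):

#include <pthread.h>

#define N 8                                   /* buffer capacity (assumed) */

/* "Monitor": the state is private and every entry procedure
   takes the monitor lock before touching it. */
static int buffer[N], count = 0, in = 0, out = 0;
static pthread_mutex_t lock     = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  notfull  = PTHREAD_COND_INITIALIZER;
static pthread_cond_t  notempty = PTHREAD_COND_INITIALIZER;

void append(int item) {                       /* called by the producer */
    pthread_mutex_lock(&lock);
    while (count == N)
        pthread_cond_wait(&notfull, &lock);   /* cwait(notfull)    */
    buffer[in] = item;
    in = (in + 1) % N;
    count++;
    pthread_cond_signal(&notempty);           /* csignal(notempty) */
    pthread_mutex_unlock(&lock);
}

int take(void) {                              /* called by the consumer */
    pthread_mutex_lock(&lock);
    while (count == 0)
        pthread_cond_wait(&notempty, &lock);  /* cwait(notempty)   */
    int item = buffer[out];
    out = (out + 1) % N;
    count--;
    pthread_cond_signal(&notfull);            /* csignal(notfull)  */
    pthread_mutex_unlock(&lock);
    return item;
}

Producer and consumer threads simply call append() and take(); all waiting and signaling is hidden inside the module, which is what makes a monitor easier to control than bare semaphores.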
7. Inter-Process Communication
a. Message Passing (Description): a type of communication between
processes. Message passing is a form of communication used in parallel
programming and object-oriented programming. Communications are
completed by the sending of messages (functions, signals and data packets) to
recipients.
Primitive Functions
1. Send: The invoking program sends a message to a process (which may be an actor or object) and relies on that process and its supporting infrastructure to select and then run appropriate code.
2. Receive: The receive primitive receives a message from a specified source process.
Message Communication Combinations
1. Blocking Send, Blocking Receive: Both the sender and
receiver are blocked until the message is delivered; this
is sometimes referred to as a rendezvous. This
combination allows for tight synchronization between
processes.
2. Nonblocking Send, Blocking Receive: It allows a
process to send one or more messages to a variety of
destinations as quickly as possible. A process that must
receive a message before it can do useful work needs to
be blocked until such a message arrives.
3. Nonblocking Send, Nonblocking Receive: Neither party is required to wait. The nonblocking send is more natural for many concurrent programming tasks (a POSIX message-queue sketch follows this list).
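As one concrete illustration (an assumed choice of API, not something the report specifies), POSIX message queues provide these primitives in C: by default mq_send blocks only when the mailbox is full and mq_receive blocks until a message arrives (error checking omitted; link with -lrt on Linux).

#include <fcntl.h>
#include <mqueue.h>
#include <stdio.h>
#include <string.h>

int main(void) {
    struct mq_attr attr = { .mq_maxmsg = 8, .mq_msgsize = 64 };

    /* Indirect addressing: both sides name a shared mailbox.
       The queue name "/demo_q" is an illustrative assumption. */
    mqd_t q = mq_open("/demo_q", O_CREAT | O_RDWR, 0600, &attr);

    const char *msg = "hello";
    mq_send(q, msg, strlen(msg) + 1, 0);    /* blocks only if the queue is full  */

    char buf[64];
    mq_receive(q, buf, sizeof buf, NULL);   /* blocks until a message is present */
    printf("received: %s\n", buf);

    mq_close(q);
    mq_unlink("/demo_q");
    return 0;
}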
b. Process Addressing Modes
Description: specifies how the destination of a message is identified in the send primitive, and how the receiver designates the source it expects a message from.
Direct Addressing: the send primitive includes a specific identifier of the destination process, and the receive primitive either names the expected source process explicitly or learns the source when the message arrives.
Indirect Addressing: messages are not sent directly from sender to receiver but to a shared data structure of queues, commonly called mailboxes; the sender deposits messages in a mailbox and the receiver picks them up from it, which decouples the two processes.
8. References:
https://www.webopedia.com/TERM/D/distributed_processing.html
https://www.geeksforgeeks.org/g-fact-70/
https://smallbusiness.chron.com/advantages-distributed-data-processing-26326.html
http://www.cs.fsu.edu/~xyuan/cop5611/lecture7.html
https://stackoverflow.com/questions/34510/what-is-a-race-condition
https://www.quora.com/What-does-%E2%80%98starvation%E2%80%99-mean-in-operating-systems
https://searchoracle.techtarget.com/definition/concurrent-processing
https://en.wikipedia.org/wiki/Monitor_(synchronization)
https://en.wikipedia.org/wiki/Semaphore_(programming)
https://www.webopedia.com/TERM/M/mutex.html
https://doc.micrium.com/display/osiiidoc/Event+Flags
http://www.personal.kent.edu/~rmuhamma/OpSystems/Myos/mutualExclu.htm
http://www.on-time.com/rtos-32-docs/rtkernel-32/programming-manual/module/mailbox.htm
https://www.definitions.net/definition/spinlock
http://www.logicio.com/HTML/semsignal.htm