Operating Systems
CSE-2008
Dr Sunil Kumar Singh
Assistant Professor
School - SCOPE
VIT-AP Amaravati
Cabin - Room-223 (CB)
1
Module No. 3
Process Synchronization
1 Process Synchronization
2 The Critical-Section Problem
3 Peterson’s Solution
4 Synchronization Hardware
5 Semaphores
6 Classic Problems of Synchronization
7 Monitors
8 Synchronization Examples
9 Atomic Transactions
2
Process Synchronization
• Process synchronization is the task of coordinating the execution of
processes so that no two processes access the same shared data and
resources at the same time.
• It is especially needed in a multi-process system, where multiple
processes run concurrently and more than one process may try to
access the same shared resource or data at the same time.
• This can lead to inconsistency of the shared data: a change made by
one process is not necessarily reflected when other processes access
the same shared data. To avoid this kind of inconsistency, the
processes need to be synchronized with each other.
3
Process Synchronization
4
Race Condition
• When more than one process runs the same code, or modifies the
same memory or other shared data, there is a risk that the resulting
value of the shared data is incorrect, because all of the processes are
trying to access and modify the shared resource at the same time.
• In effect, the processes race to have their own result take effect;
this situation is called a race condition. Since many processes use
the same data, the outcome may depend on the order in which they
execute (a small example is sketched below).
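A minimal sketch of a race condition, assuming POSIX threads (the counts are illustrative): two threads increment a shared counter without any synchronization, so updates can be lost and the final value may be less than expected.

#include <pthread.h>
#include <stdio.h>

long counter = 0;                 /* shared data */

void *increment(void *arg) {
    for (int i = 0; i < 1000000; i++)
        counter++;                /* read-modify-write: not atomic */
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, increment, NULL);
    pthread_create(&t2, NULL, increment, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    /* Expected 2000000, but the interleaved updates can be lost. */
    printf("counter = %ld\n", counter);
    return 0;
}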
5
Critical-Section Problem
• The critical section is the part of a program that accesses shared
resources. The resource may be any resource in the computer, such
as a memory location, a data structure, the CPU, or any I/O device.
• The critical section must not be executed by more than one process
at the same time; the operating system therefore faces the difficulty
of deciding when to allow and when to disallow processes to enter
the critical section.
• The critical-section problem is to design a set of protocols that
ensure a race condition among the processes can never arise.
6
Critical-Section
7
Critical section
Let us look at the different elements/sections of a program (a general
skeleton is sketched below):
• Entry Section: The entry section controls whether a process may
enter its critical section.
• Critical Section: The critical section is where the shared data is
accessed; it ensures that only one process is modifying the shared
data at a time.
• Exit Section: The exit section handles the entry of other processes
into the critical section after one process has finished executing it.
• Remainder Section: The remaining part of the code, not covered by
the sections above, is the remainder section.
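A C-like pseudocode skeleton of these four sections (the actual entry and exit code depend on the particular solution chosen):

do {
    /* Entry section: request permission to enter the critical section */

    /* Critical section: access and modify the shared data */

    /* Exit section: announce that the critical section is now free */

    /* Remainder section: the rest of the process's code */
} while (true);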
8
Requirements of Synchronization
Primary mechanisms
Mutual Exclusion
• Our solution must provide mutual exclusion: if one process is executing inside its
critical section, then no other process may enter the critical section.
9
Requirements of Synchronization
Progress
• Progress means that if a process does not need to execute in the
critical section, it must not stop other processes from getting into
the critical section.
Secondary mechanisms
Bounded Waiting
• We should be able to bound the waiting time for every process to
get into the critical section; no process should have to wait
endlessly to enter its critical section.
Architectural Neutrality
• Our mechanism must be architecturally neutral: if our solution
works correctly on one architecture, it should also run on other
architectures as well.
10
Solutions To The Critical Section
Some widely used methods to solve the critical-section problem:
• Peterson's Solution
• Synchronization Hardware
• Mutex Locks
• Semaphore Solution
11
Peterson Solution
• Peterson’s solution is a classic software-based solution to the
critical-section problem.
• It may not work correctly on modern computer architectures.
• In this solution, when one process is executing in its critical
section, the other process executes only the rest of its code, and
vice versa. This ensures that only a single process runs in the
critical section at any given time (see the sketch below).
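A sketch of Peterson's algorithm for two processes Pi and Pj (i and j are 0 and 1, with j == 1 - i), written as C-style pseudocode; the shared variables flag[] and turn used in the following slides are the ones declared here:

/* shared variables */
boolean flag[2] = {false, false};  /* flag[i] == true: Pi wants to enter */
int turn;                          /* whose turn it is to enter          */

/* structure of process Pi */
do {
    flag[i] = true;                /* Pi announces it is ready               */
    turn = j;                      /* give priority to the other process     */
    while (flag[j] && turn == j)   /* wait while Pj is ready and has the turn */
        ;                          /* busy wait */

    /* critical section */

    flag[i] = false;               /* Pi leaves; Pj may now enter            */

    /* remainder section */
} while (true);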
12
13
Two Processes Executing Concurrently
14
Peterson Solution
• To prove that the method is a solution to the critical-section
problem, we need to show that mutual exclusion is preserved.
• Pi enters its critical section only if either
flag[j] == false or turn == i.
• If both processes want to enter their critical sections at the same
time, then flag[i] == flag[j] == true.
• However, the value of turn can be either 0 or 1, but not both.
Hence, only one of the processes can have successfully passed the
while test (and entered its critical section); the other process has to
wait until that process leaves its critical section.
• Therefore, mutual exclusion is preserved.
15
Peterson Solution
The progress requirement is satisfied.
• Case 1:
Pi is ready to enter its critical section.
If Pj is not ready to enter its critical section (it is in its remainder
section), then flag[j] == false, and Pi can enter its critical section.
• Case 2:
Pi and Pj are both ready to enter their critical sections, so
flag[i] == flag[j] == true.
Either turn == i or turn == j.
If turn == i, then Pi will enter the critical section.
If turn == j, then Pj will enter the critical section.
16
Peterson Solution
The bounded-waiting requirement is met.
• Once Pj exits its critical section, it will reset flag[j] to false,
allowing Pi to enter its critical section.
• Even if Pj immediately resets flag[j] to true, it must also
set turn to i.
• Then, Pi will enter the critical section after at most one
entry by Pj.
17
18
LOCK Variable
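The slide's figure is not reproduced here; a common form of the lock-variable idea (assumed, as a sketch) uses a single shared flag. Note that it does not actually guarantee mutual exclusion, because testing the lock and setting the lock are two separate, non-atomic steps:

int lock = 0;            /* shared: 0 = free, 1 = taken */

do {
    while (lock == 1)    /* wait until the lock looks free */
        ;
    lock = 1;            /* take the lock -- another process may have
                            entered between the test and this assignment */

    /* critical section */

    lock = 0;            /* release the lock */

    /* remainder section */
} while (1);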
19
Synchronization Hardware
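The hardware details are shown in the slide's figure; a typical sketch (assumed) is based on an atomic test-and-set instruction. The function below only illustrates what the hardware does; in reality the whole operation executes as one uninterruptible instruction:

/* executed atomically by the hardware */
boolean test_and_set(boolean *target) {
    boolean rv = *target;
    *target = true;
    return rv;
}

boolean lock = false;             /* shared */

do {
    while (test_and_set(&lock))   /* spin until the old value was false */
        ;                         /* busy wait */

    /* critical section */

    lock = false;                 /* release the lock */

    /* remainder section */
} while (true);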
20
21
Turn Variables (Strict Alternation)
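The slide's figure shows the turn-variable scheme; a sketch of strict alternation for process Pi (assumed, following the usual textbook form) is:

int turn = 0;            /* shared: whose turn it is (0 or 1) */

/* structure of process Pi (the other process is Pj) */
do {
    while (turn != i)    /* busy-wait until it is Pi's turn */
        ;

    /* critical section */

    turn = j;            /* hand the turn over to the other process */

    /* remainder section */
} while (true);

Mutual exclusion holds, but progress is violated: the two processes must alternate strictly even when one of them does not want to enter its critical section.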
22
Semaphores
Semaphores are integer variables whose value acts as a signal that
allows or disallows a process's access to the critical section of code or
to certain other resources (the classic wait/signal operations are
sketched below).
There are mainly two types of semaphores, i.e., two types of signaling
integer variables: binary semaphores and counting semaphores.
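The classical definitions of the two atomic semaphore operations, wait (P) and signal (V), written as pseudocode (both bodies are assumed to execute atomically):

wait(S) {            /* also called P, or the down operation */
    while (S <= 0)
        ;            /* busy wait */
    S--;
}

signal(S) {          /* also called V, or the up operation */
    S++;
}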
23
Semaphores
24
Characteristics of Semaphores
• Used to provide mutual exclusion (binary semaphores).
• Used to control access to resources (counting semaphores).
• A solution using semaphores can lead to deadlock.
• A solution using semaphores can lead to starvation.
• A solution using semaphores can involve busy waiting.
• Semaphores may lead to priority inversion.
• Semaphores are machine-independent.
25
Critical Section Solution
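The slide's figure is not reproduced here; a typical sketch (assumed) protects the critical section with a binary semaphore mutex initialized to 1:

semaphore mutex = 1;     /* shared binary semaphore */

do {
    wait(mutex);         /* entry section */

    /* critical section */

    signal(mutex);       /* exit section */

    /* remainder section */
} while (true);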
26
Questions on Semaphore
Q.1
A counting semaphore S is initialized to 10. Then, 6 P operations and 4 V
operations are performed on S. What is the final value of S?
Solution:
We know that:
• The P operation (also called the wait operation) decrements the value of the
semaphore variable by 1.
• The V operation (also called the signal operation) increments the value of the
semaphore variable by 1.
Thus,
Final value of semaphore variable S
= 10 – (6 x 1) + (4 x 1)
= 10 – 6 + 4
= 8
27
Questions on Semaphore
Q.2
A counting semaphore S is initialized to 7. Then, 20 P operations and 15 V
operations are performed on S. What is the final value of S?
Solution:
Thus,
Final value of semaphore variable S
= 7 – (20 x 1) + (15 x 1)
= 7 – 20 + 15
= 2
28
Questions on Semaphore
29
Questions on Semaphore
30
Questions on Semaphore
31
Classical problems of
synchronization
The classical problems of synchronization are as follows:
• Bounded-Buffer problem
• Sleeping Barber problem
• Dining Philosophers problem
• Readers and Writers problem
32
Bounded-Buffer problem
• Also known as the Producer-Consumer problem. In this problem
there is a buffer of n slots, and each slot is capable of storing one
unit of data. Two processes operate on the buffer – the Producer and
the Consumer. The producer tries to insert data and the consumer
tries to remove data.
• If the two processes run simultaneously without synchronization,
they will not yield the expected output.
• The solution to this problem is to create two counting semaphores,
full and empty, to keep track of the filled and empty buffer slots
(a sketch follows below).
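A sketch of the bounded-buffer solution using the empty and full semaphores described above; a binary semaphore mutex is also assumed here to protect the buffer itself, and N is an illustrative buffer size:

#define N 10                 /* number of buffer slots (assumed) */

semaphore empty = N;         /* counts empty slots  */
semaphore full  = 0;         /* counts filled slots */
semaphore mutex = 1;         /* mutual exclusion on the buffer */

/* Producer */
do {
    /* produce an item */
    wait(empty);             /* wait for an empty slot */
    wait(mutex);
    /* insert the item into the buffer */
    signal(mutex);
    signal(full);            /* one more filled slot */
} while (true);

/* Consumer */
do {
    wait(full);              /* wait for a filled slot */
    wait(mutex);
    /* remove an item from the buffer */
    signal(mutex);
    signal(empty);           /* one more empty slot */
    /* consume the item */
} while (true);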
33
Bounded-Buffer problem
34
35
Bounded-Buffer problem
36
Readers and writers problem
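The slide's figure is not reproduced here; a common form of the first readers-writers solution (assumed, as a sketch) uses a read_count counter and two semaphores, so that many readers may read together while writers get exclusive access:

semaphore rw_mutex = 1;      /* exclusive access for writers / first reader */
semaphore mutex    = 1;      /* protects read_count */
int read_count = 0;

/* Writer */
do {
    wait(rw_mutex);
    /* write to the shared data */
    signal(rw_mutex);
} while (true);

/* Reader */
do {
    wait(mutex);
    read_count++;
    if (read_count == 1)     /* first reader locks out writers */
        wait(rw_mutex);
    signal(mutex);

    /* read the shared data */

    wait(mutex);
    read_count--;
    if (read_count == 0)     /* last reader lets writers back in */
        signal(rw_mutex);
    signal(mutex);
} while (true);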
37
38
39
40
41
42
Thank You
43