Lecture 05
the values of shared variables between threads depend on the logic of the
program
we can have a problem of shared data corruption (compromising data
coherency) when:
- threads execute in fake concurrency (a thread is interrupted and the cpu is assigned
to another one), or
- threads execute in parallel (simultaneous execution on separate cpus/cores),
- and these threads share some data
if this interleaving is left uncontrolled, the system may reach an incorrect state.
definition
Race condition: when multiple threads are running in real/fake
concurrency and they share at least one common variable AND the
outcome of the execution depends on the order in which the threads
modify the shared variable.
solution → ensure that only one of the concurrent threads at a time can manipulate the
shared variable
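a minimal sketch of the race just defined (my own example, names like worker and ITERS are
made up): two threads increment the same counter with no synchronization, so some
increments get lost and the final value is usually less than 2*ITERS.

#include <pthread.h>
#include <stdio.h>

#define ITERS 1000000

long counter = 0;                  // the shared variable

void *worker(void *arg)
{
    for (int i = 0; i < ITERS; i++)
        counter++;                 // read-modify-write, not atomic
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %ld (expected %d)\n", counter, 2 * ITERS);
    return 0;
}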
Process Synchronization:
definition
Critical Section: the segment of code in which a process accesses shared data; at most one
of the cooperating processes may be executing its critical section at any time.
Program()
{
[remainder section]
entry section -> here the process requests access to the critical section from the os
[critical section]
exit section -> here the process signals that it has left the critical section
[remainder section]
}
Synchronization mechanisms:
Mutex Locks:
acquire(M){
while(!M);      // spin (busy-wait) until M becomes true (lock available)
M=false;        // take the lock
}
release(M){
M=true;         // make the lock available again
}
Program(){
acquire(M);     // entry section
critical section;
release(M);     // exit section
}
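the same pattern with a real POSIX mutex (a sketch of mine, not code from the lecture):
pthread_mutex_lock/unlock play the roles of acquire/release, and the lost updates from the
race example above disappear.

#include <pthread.h>
#include <stdio.h>

long counter = 0;
pthread_mutex_t M = PTHREAD_MUTEX_INITIALIZER;

void *worker(void *arg)
{
    for (int i = 0; i < 1000000; i++) {
        pthread_mutex_lock(&M);    // acquire(M): entry section
        counter++;                 // critical section
        pthread_mutex_unlock(&M);  // release(M): exit section
    }
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %ld (always 2000000 now)\n", counter);
    return 0;
}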
mutually exclusive access to the CS may be violated if a context switch occurs while a
process is acquiring the lock (after while(!M) and before M=false)
→ the primitives must be atomic = only one acquire may be executing for a given
mutex at a time in the entire system.
mutex locks (spinlocks) cause busy-waiting (spinning, which wastes cpu cycles) →
a mutex is useful when the CS is short (→ short spin time + no context switching)
spinning processes get interrupted, giving the lock holder the chance to
release the lock.
examples of deadlock:
- forgetting to release the lock (bad coding)
- (uniprocessor systems): if an ISR (interrupt handler) tries to acquire the lock but the
lock isn't available, the lock holder cannot run to release it because we cannot interrupt
the ISR while it's spinning → the system gets stuck
Atomic Implementation
Program(){
//remainder section
while(test_and_set(&lock)==0);   // atomically write 0 into lock and get its old value; spin while it was already 0 (taken)
//critical section
lock=1;                          // release: 1 = free, 0 = taken
//remainder section
}
if 2 cpus execute test_and_set at the same time, the hardware ensures that the 2
function calls happen atomically and one after the other.
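a possible real-code version of this spinlock using C11 atomics (my assumption, the lecture
stays at pseudocode level): atomic_flag_test_and_set() sets the flag and returns its old
value in one indivisible step. note it uses the usual convention (flag set = lock held),
the opposite of the 1 = free convention in the pseudocode above.

#include <stdatomic.h>

atomic_flag lock = ATOMIC_FLAG_INIT;     // clear = free

void acquire(void)
{
    while (atomic_flag_test_and_set(&lock))
        ;                                // spin while the old value was "set" (lock held)
}

void release(void)
{
    atomic_flag_clear(&lock);            // lock becomes free again
}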
the hardware provides the XCHG instruction to write into a memory location and return the old
value in an uninterruptible way. ex: MOV AX,0 then XCHG AX,[0x0E3045] . Logical
representation:
Program(){
//remainder section
while(XCHG(&lock, 0)==0);   // atomically swap 0 into lock; spin while the old value was 0 (taken)
//critical section
lock=1;                     // release: 1 = free
//remainder section
}
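the XCHG idea can also be sketched in C11 with atomic_exchange(), keeping the lecture's
convention (1 = free, 0 = taken); this is my illustration, not code from the lecture.

#include <stdatomic.h>

atomic_int lock = 1;                     // 1 = free

void acquire(void)
{
    while (atomic_exchange(&lock, 0) == 0)
        ;                                // keep swapping 0 in until the old value was 1 (free)
}

void release(void)
{
    atomic_store(&lock, 1);              // give the lock back
}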
Semaphores:
Waiting style:
M is initialized to 0, so one thread/process gets stuck when trying to acquire it; the other
runs and releases M, and only then can the first acquire it and run.
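a small sketch of this waiting style with POSIX semaphores (my example, the thread names are
made up): the semaphore starts at 0, so sem_wait() blocks until the other thread calls
sem_post().

#include <semaphore.h>
#include <pthread.h>
#include <stdio.h>

sem_t M;

void *waiter(void *arg)
{
    sem_wait(&M);                  // blocks: M was initialized to 0
    printf("resumed after the signal\n");
    return NULL;
}

void *signaler(void *arg)
{
    printf("doing some work, then releasing M\n");
    sem_post(&M);                  // wakes up the waiter
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    sem_init(&M, 0, 0);            // initial value 0
    pthread_create(&t1, NULL, waiter, NULL);
    pthread_create(&t2, NULL, signaler, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    sem_destroy(&M);
    return 0;
}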
better implementation (block instead of busy-waiting):
P(S){
S=S-1;
if(S<0)
{
block and place P in Q;   // a negative S counts the waiting processes
}
}
V(S){
S=S+1;
if (S<=0)
{
Wake up P from head of Q; // someone was still waiting
}
}
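one way this blocking P/V could be realized in user space, sketched with a pthread mutex and
a condition variable standing in for the queue Q (my assumption: the bookkeeping below keeps
S >= 0 instead of letting it go negative, but the blocking behaviour is the same).

#include <pthread.h>

typedef struct {
    int value;                     // the semaphore count S (kept >= 0 in this sketch)
    pthread_mutex_t m;
    pthread_cond_t q;              // stands in for the queue Q of blocked processes
} sem;

void sem_init_user(sem *s, int initial)
{
    s->value = initial;
    pthread_mutex_init(&s->m, NULL);
    pthread_cond_init(&s->q, NULL);
}

void P(sem *s)
{
    pthread_mutex_lock(&s->m);
    while (s->value == 0)          // nothing available: block and wait in Q
        pthread_cond_wait(&s->q, &s->m);
    s->value--;
    pthread_mutex_unlock(&s->m);
}

void V(sem *s)
{
    pthread_mutex_lock(&s->m);
    s->value++;
    pthread_cond_signal(&s->q);    // wake up one waiter from Q, if any
    pthread_mutex_unlock(&s->m);
}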
binary version (S is either 0 or 1):
P(S){
if(S==1) S=0;
else {Block and place P in Q;}
}
V(S){
if (Q is empty) S=1;
else wake up P from Q;
// each process, when resumed, will straightforwardly start
// executing its CS
}
Peterson's algorithm
Pi()
{
do {
Interested[i]=true;               // Pi declares it wants to enter
turn = j;                         // but gives priority to Pj
while (Interested[j] && turn==j); // entry section: wait while Pj is interested and it is Pj's turn
CS
Interested[i]=false;              // exit section
. . .                             // remainder section
} while(true);
}
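a runnable completion of the pseudocode above (my sketch): two threads protect a shared
counter with Peterson's algorithm. the shared flags are declared _Atomic so the compiler and
cpu cannot reorder the accesses; compile with -pthread.

#include <stdatomic.h>
#include <pthread.h>
#include <stdio.h>

_Atomic int interested[2] = {0, 0};
_Atomic int turn = 0;
long counter = 0;

void *worker(void *arg)
{
    int i = *(int *)arg, j = 1 - i;
    for (int k = 0; k < 100000; k++) {
        atomic_store(&interested[i], 1);      // Pi wants to enter
        atomic_store(&turn, j);               // but gives priority to Pj
        while (atomic_load(&interested[j]) && atomic_load(&turn) == j)
            ;                                 // entry section: spin
        counter++;                            // critical section
        atomic_store(&interested[i], 0);      // exit section
    }
    return NULL;
}

int main(void)
{
    pthread_t t[2];
    int id[2] = {0, 1};
    for (int i = 0; i < 2; i++)
        pthread_create(&t[i], NULL, worker, &id[i]);
    for (int i = 0; i < 2; i++)
        pthread_join(t[i], NULL);
    printf("counter = %ld (expected 200000)\n", counter);
    return 0;
}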