Exercise Solutions 3-5
Q 3.5 Assume that a distributed system is susceptible to server failure. What mechanisms would be
required to guarantee the “exactly once” semantics for execution of RPCs?
Assumption: the message was received at the server, and the server sends an ACK message.
Maintain a mirror configuration with a hot-swap standby server. By keeping a durable history of
requests (which includes the client, the request timestamp, and the request status), the server can
respond to any client in a manner that adheres to the “exactly once” semantic: if a retransmitted
request already appears in the history as completed, the server re-sends the ACK (and any cached
result) without executing the request again.
Q 3.6 Describe the differences among short-term, medium-term and long-term scheduling.
Short-term scheduling: selects from the processes (already in memory) that are ready to execute.
It runs frequently, roughly once every 100 ms.
Medium-term scheduling: swaps processes out of and into memory, reducing the degree of
multiprogramming. This scheduler adapts to changes in memory requirements and improves the
process mix.
Long-term scheduling: selects processes from a mass-storage device to load into memory. It
executes much less frequently, often with minutes between invocations.
Q 3.11 Give an example of a situation in which ordinary pipes are more suitable than named pipes and
an example of a situation in which named pipes are more suitable than ordinary pipes.
Named pipes can be used to listen for requests from other processes (similar to TCP/IP ports). Any
process that knows the pipe's name can open it and send requests. Unnamed pipes cannot be used
for this purpose, because they are accessible only through inherited file descriptors.
Ordinary pipes are useful when communication happens only between two related processes known
beforehand, typically a parent and the child it forks. Using a named pipe in that scenario would
add unnecessary overhead (creating, naming, and later cleaning up a filesystem entry).
Q 3.12 Consider the RPC mechanism. Describe the undesirable consequences that could arise from
not implementing the “at most once” or the “exactly once” semantics. Describe possible uses for a
mechanism that has neither of these guarantees.
Depending on whether the acknowledgement is received, an RPC call could be executed multiple
times or never executed at all. Repeating the same operation several times may leave the system in
an inconsistent state (for example, a “debit account” RPC applied twice).
A mechanism with neither guarantee can still be used where only read-only, idempotent operations
are performed. Since such operations do not change state, executing them multiple times does not
affect the correctness of the system.
Q 3.14 What are the benefits and disadvantages of each of the following?
a. Synchronous and Asynchronous communication
Synchronous communication is simple to implement because the sender and receiver block, so there
is only one communication in progress at a time. However, the blocked processes cannot perform
any other work until the communication completes. In asynchronous communication, a process may
have several communications outstanding, so it needs mechanisms to identify messages and to store
and respond to each one individually; it also needs a way to detect when a new message has arrived
and to break from its regular path to handle it. The advantage is that the process's flow is not
obstructed while it waits for a response.
b. Automatic and explicit buffering
With automatic buffering, the queue has effectively unbounded capacity, so the sender never
blocks, but the system must manage a potentially unbounded amount of memory. With explicit
buffering, the buffer capacity is fixed by the system: the implementation is simple and memory
usage is bounded, but the buffer cannot grow dynamically. At most n messages can reside in it;
after that, the sender must block. The capacity requirements therefore need to be known beforehand.
c. Send by copy and send by reference
With send by copy, the callee receives its own copy, so the sender's data cannot be modified, at
the cost of copying. With send by reference, an internal pointer to the original variable is sent;
this avoids the copy and allows the callee to modify the variable, but it requires shared memory
and careful synchronization.
d. Fixed-sized and variable-sized messages
Fixed-sized messages keep the system implementation simple but force the programmer to split or
pad data. Variable-sized messages make the system implementation more complex, since its data
structures must grow dynamically, but programming is easier because the programmer need not
worry about the size of the message.
Q 4.2 What are two differences between user level threads and kernel level threads? Under what
circumstances is one type better than the other?
User threads are managed without kernel support, whereas kernel threads are managed by the kernel.
User threads belong to a process and must ultimately be mapped to kernel threads (1:1, n:1 or n:m).
User threads are cheaper to create and switch, so they are preferable when threads rarely block;
kernel threads are preferable when threads make blocking system calls or should run in parallel on
multiple processors, since the kernel can schedule them independently.
Q 4.3 Describe the action the kernel takes to context-switch between kernel level threads.
Save the CPU register state of the currently running thread and load the saved register state of
the new thread. Because kernel threads of the same process share an address space, no address-space
switch is needed; the kernel only updates its scheduling data structures.
Q 4.7 Provide two programming examples in which multithreading does not perform better than a
single-threaded solution.
(1) A strictly sequential computation, such as iteratively computing a single Fibonacci number,
where each step depends on the previous one and there is no work to run in parallel. (2) A program
whose threads all contend for one shared resource, such as repeatedly incrementing a single
lock-protected counter: the threads serialize on the lock, so the program does the same work as a
single thread plus synchronization and context-switch overhead.
Q 4.10 Which of the following components of program are shared across threads in a multithreaded
process?
Heap Memory
Global Variables
Q 4.12
The design is simpler because the kernel does not need to handle threads and processes separately;
both are tasks created by clone(). On the other hand, creating a thread carries roughly the
overhead of creating a process, which works against the lightweight nature of threads. The
advantage Linux offers is that the degree of resource sharing between tasks (address space, open
files, signal handlers) can be controlled through the flags passed to clone().
Q 4.13 The program shown in Figure 4.14 uses Pthreads API. What would be the output from the
program at LINE C and LINE P?
LINE C = 5; LINE P = 0. The thread created in the child process shares the child's address space
and sets value to 5, so LINE C prints 5. The parent's copy of value is unaffected by anything the
child does after fork(), so LINE P prints 0.
Q 4.14
Q 5.3
a. FCFS: average turnaround time = [(8 - 0) + (12 - 0.4) + (13 - 1)] / 3 = 10.53
b. Nonpreemptive SJF: P1 starts at time 0; since scheduling is nonpreemptive, it must run to
completion. P2 and P3 arrive while it runs. When P1 finishes at time 8, P3 (the shorter job) runs
and finishes at 9; P2 then runs from 9 to 13. Average turnaround time =
[(8 - 0) + (9 - 1) + (13 - 0.4)] / 3 = 9.53
Q 5.12
a. Scheduling order (Gantt charts; RR quantum = 1):
FCFS: P1 P2 P3 P4 P5
SJF: P2 P4 P3 P5 P1
Nonpreemptive priority: P2 P5 P1 P3 P4
RR: P1 P2 P3 P4 P5 P1 P3 P5 P1 P5 P1 P5 P1 P5 P1 P1 P1 P1 P1
b. Turnaround time:
     FCFS  SJF  NPP  RR
P1   10    19   16   19
P2   11    1    1    2
P3   13    4    18   7
P4   14    2    19   4
P5   19    9    6    14
c. Waiting time (turnaround time minus burst time; all processes arrive at time 0):
     FCFS  SJF  NPP  RR
P1   0     9    6    9
P2   10    0    0    1
P3   11    2    16   5
P4   13    1    18   3
P5   14    4    1    9
Q 5.14
a. The process is scheduled twice as often and completes twice as fast; its turnaround and waiting
times are reduced. Of course, if this were done for all processes, it would have no net effect.
b. Advantage: it provides a way to finish selected tasks earlier without raising their priority,
and it does not increase the risk of starvation. Disadvantages: other processes are delayed, and
when there are no other jobs to schedule it adds unnecessary overhead, since context switches
occur even though only one process is running.
c. Allow the scheduler to assign adaptive time quanta, so that processes needing preferential
treatment receive a longer quantum while the other 'regular' processes receive a shorter one. This
achieves the same effect without the duplicate-entry overhead discussed above.
Q 5.21 Using the formula priority = (recent CPU usage / 2) + base, with base = 60:
P1 = (40 / 2) + 60 = 80
P2 = (18 / 2) + 60 = 69
P3 = (10 / 2) + 60 = 65
Since a lower value means a higher priority, P3 now has the highest priority and P1 the lowest.