Unit 2
Transparency
Transparency requires that IPC hide the details of where processes are located and how messages are transported, so that communication with a remote process looks the same as communication with a local one.
Scalability
Scalability is a critical feature that ensures IPC mechanisms can handle an increasing
number of processes and communication demands without significant performance
degradation. Effective IPC systems must efficiently manage resources and maintain
performance as the system grows.
Fault Tolerance
Fault tolerance is vital for ensuring that communication can continue even in the face
of failures. Techniques like message retries, replication, and alternative routing paths
help maintain reliable communication and system robustness.
Latency and Bandwidth
Latency and bandwidth considerations are crucial for ensuring timely and efficient data transfer. Distributed systems often operate over wide-area networks where these factors can vary significantly. Minimizing latency and optimizing bandwidth usage are essential for maintaining high performance.
Security
Security mechanisms such as authentication, encryption, and access control protect messages in transit and ensure that only authorized processes can take part in communication.
Synchronization
Synchronization mechanisms ensure that processes can coordinate their actions and
maintain consistency, especially in operations that require multiple processes to work
together. Effective synchronization prevents race conditions and ensures coherent
system behavior.
Heterogeneity
Heterogeneity must be addressed as distributed systems often involve different
hardware, operating systems, and network protocols. IPC mechanisms need to be
adaptable and interoperable across diverse environments to ensure seamless
communication between heterogeneous components.
1.B) Explain the client-server communication model. Also discuss marshalling in detail.
The application programs using the client-server model should follow the strategies given below:
o A server provides a service to many clients, not just a single client; the client-server model therefore follows a many-to-one relationship, in which many clients can use the service of one server.
o Services are required frequently, and many users have a specific client-server application program; for example, client-server application programs allow the user to access files, send e-mail, and so on. If the services are more customized, then we should have one generic application program that allows the user to access the services available on the remote computer.
Client
A client is a program that runs on the local machine requesting service from the server.
A client program is a finite program, meaning that it is started by the user and terminates when the service is completed.
Server
A server is a program that runs on the remote machine providing services to the clients.
When a client requests a service, the server opens the door for the incoming request, but the server itself never initiates a service.
Marshalling:
Marshalling in distributed systems involves transforming data structures or objects
into a format suitable for transmission over a network and then reconstructing them
at the destination. Here's a breakdown of how marshalling works and why it's essential:
What is Marshalling?
Marshalling is the process of converting data structures or objects into a format that can be easily transmitted between different components in a distributed system. This typically involves serializing the data into a stream of bytes at the sender (marshalling) and reconstructing the original structure at the receiver (unmarshalling).
Approaches to Marshalling
• CORBA's Common Data Representation (CDR): an agreed external representation for the primitive and structured types passed in remote invocations. Example: a Person struct with fields name, place, and year would be marshalled according to CDR rules.
• Object serialization (as in Java): an object with fields name, place, and year is serialized into bytes and can be deserialized back into the original object.
• XML and similar textual formats: the data is represented as self-describing markup, for example:
<Person>
<name>John</name>
<place>NY</place>
<year>1990</year>
</Person>
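The round trip described above can be sketched in Python. This is an illustrative example only: it uses JSON as the external representation (one of many possible formats, not the CDR or Java encodings named above), and the field values are taken from the Person example.

```python
import json

# In-memory record with the fields from the Person example above.
person = {"name": "John", "place": "NY", "year": 1990}

# Marshalling: flatten the structure into bytes suitable for transmission.
wire = json.dumps(person).encode("utf-8")

# Unmarshalling: rebuild the original structure at the destination.
received = json.loads(wire.decode("utf-8"))

print(received == person)  # True: the round trip preserves the data
```

Binary schemes such as CDR follow the same marshal/unmarshal pattern, but encode each field according to fixed type rules rather than as text.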
2.A) Illustrate the necessary and sufficient conditions for causal ordering.
Causal ordering requires that if one send event causally precedes another, the corresponding messages are delivered in that order at every common destination. For example, if send(M1) -> send(M2), then every recipient of both the messages M1 and M2 must receive the message M1 before receiving the message M2. In distributed systems, the causal ordering of messages is not automatically guaranteed.
Both protocols require that messages be delivered reliably, and both assume that there is no network partitioning between the systems. The general idea of both protocols is to deliver a message to a process only if the message immediately preceding it has been delivered to that process. Otherwise, the message is not delivered immediately; instead, it is stored in a buffer until the message preceding it has been delivered.
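The buffering rule just described can be sketched for the simplified case of a single sender that stamps each message with a sequence number. This is a minimal sketch, not the full multi-process protocol: a message is delivered only when its immediate predecessor has been delivered, and early arrivals wait in a buffer.

```python
class CausalReceiver:
    """Deliver message n only after message n-1 has been delivered;
    out-of-order messages wait in a buffer (single-sender simplification)."""

    def __init__(self):
        self.next_seq = 0    # sequence number we are allowed to deliver next
        self.buffer = {}     # seq -> message, held back until deliverable
        self.delivered = []  # messages in the order they were delivered

    def receive(self, seq, msg):
        self.buffer[seq] = msg
        # Drain the buffer while the next expected message is present.
        while self.next_seq in self.buffer:
            self.delivered.append(self.buffer.pop(self.next_seq))
            self.next_seq += 1

r = CausalReceiver()
r.receive(1, "M2")   # arrives early: buffered, not delivered
r.receive(0, "M1")   # its predecessor arrives: both become deliverable
print(r.delivered)   # ['M1', 'M2']
```

Even though M2 arrived first, it is held back until M1 has been delivered, preserving the send order.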
1. Causal Relationship
Necessary Condition: For two events e1 and e2 in a distributed system, if e1 causally affects e2, then e1 must occur before e2 in any causal ordering.
2. Happens-Before Relationship
The happens-before relation e1 -> e2 holds if either:
• e1 and e2 are in the same process, and e1 occurs before e2; or
• e1 is the sending of a message, and e2 is the receipt of that message in another process.
3. Consistency Across Processes
Necessary Condition: Causal ordering must be consistent across all processes in the distributed system. This means that all processes should agree on the order of events based on their causal relationships.
4. No Causality Violations
Necessary Condition: A causal ordering must not violate causality. If two events are causally related, their order should not be inverted. For example, if e1 causally affects e2, then e2 should not precede e1 in the ordering.
Sufficient Condition: An ordering that maintains the correct sequence of causally
related events and respects the happens-before relationship is free from causality
violations.
Example
Consider three events A, B, and C, where A causally precedes B and C depends on B.
• Consistency: Any valid ordering must reflect that A happens before B, and that B happens before C, since C depends on B.
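One common way to detect the happens-before relation is with vector clocks; the text above does not prescribe a mechanism, so this is an assumed illustration. Each process keeps a vector of counters, increments its own entry on every event, and merges in the sender's vector when a message arrives; e1 -> e2 holds exactly when e1's vector is componentwise less than or equal to e2's and the two differ.

```python
def merge(a, b):
    # Take the componentwise maximum of two vector clocks.
    return [max(x, y) for x, y in zip(a, b)]

def happened_before(a, b):
    # a -> b iff a <= b componentwise and a != b.
    return all(x <= y for x, y in zip(a, b)) and a != b

# Two processes, P0 and P1, each holding a vector clock [c0, c1].
p0 = [0, 0]
p1 = [0, 0]

p0[0] += 1
send_m1 = p0[:]            # e1: P0 sends M1, stamped with P0's clock

p1 = merge(p1, send_m1)    # P1 receives M1 and merges the sender's clock
p1[1] += 1                 # ...then counts the receive as its own event

p1[1] += 1
send_m2 = p1[:]            # e2: P1 sends M2, causally after e1

print(happened_before(send_m1, send_m2))  # True: e1 causally affects e2
```

Because M2 was sent after M1 was received, e1's stamp [1, 0] is dominated by e2's stamp [1, 2], so the ordering e1 -> e2 is detected.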
2.B) What are sockets? How are they used in IPC in distributed systems?
A socket is an endpoint of communication: a process sends and receives data through a socket, which is identified by an IP address and a port number.
2. Client-Server Model: a server socket listens on a well-known port, and client sockets connect to it to request services.
3. Data Transmission: once a connection is established (TCP) or an address is known (UDP), processes exchange byte streams or datagrams through the socket interface.
4. Distributed Systems: sockets allow processes on different machines to communicate over the network, forming the basis of higher-level IPC mechanisms such as remote procedure calls.
5. Concurrency: a server can handle many client sockets at once, typically using threads, multiple processes, or non-blocking I/O.
Sockets are crucial for network communication, enabling applications to interact with
each other and coordinate tasks effectively across different systems.
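A minimal runnable sketch of socket-based IPC, using Python's standard socket and threading modules; the loopback address, ephemeral port, and messages are illustrative choices, not part of the text above. One server socket accepts requests from several short-lived clients, mirroring the many-to-one client-server relationship.

```python
import socket
import threading

def handle(conn):
    # Serve one client: echo the request back with a prefix, then close.
    with conn:
        data = conn.recv(1024)
        conn.sendall(b"echo:" + data)

def run_server(server, n_clients):
    # One server socket accepts requests from many clients (many-to-one).
    for _ in range(n_clients):
        conn, _addr = server.accept()
        threading.Thread(target=handle, args=(conn,)).start()

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))    # port 0: the OS assigns a free port
server.listen()
port = server.getsockname()[1]
threading.Thread(target=run_server, args=(server, 2), daemon=True).start()

replies = []
for msg in (b"hello", b"world"):
    # Each client is finite: it connects, requests the service, and terminates.
    with socket.create_connection(("127.0.0.1", port)) as c:
        c.sendall(msg)
        replies.append(c.recv(1024))

print(replies)  # [b'echo:hello', b'echo:world']
```

The server never initiates communication; it only reacts to incoming requests, while each client drives its own short interaction.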
3.A) Discuss the issues relating to datagram communication.
Datagram communication in distributed systems, while offering simplicity and efficiency, introduces several issues that can impact reliability and consistency. Datagram communication uses packets, or datagrams, to send data across a network without establishing a connection. The main issues are:
• Message loss: datagrams may be dropped in transit, and the sender receives no acknowledgement.
• Ordering: datagrams may arrive in a different order from the one in which they were sent.
• Duplication: a datagram may be delivered more than once.
• Size limits: each datagram has a maximum size, so larger data must be split and reassembled by the application.
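The connectionless style can be seen in a short Python sketch using the standard socket module with UDP; the loopback address, port choice, and payload are illustrative assumptions. Note that no connection is set up before sending, and the sender receives no acknowledgement.

```python
import socket

# Connectionless exchange: no connection is set up first, and UDP itself
# gives no guarantee of delivery, ordering, or duplicate suppression.
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))      # port 0: the OS assigns a free port
receiver.settimeout(2)               # avoid blocking forever if the datagram is lost
addr = receiver.getsockname()

sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"datagram-1", addr)   # fire-and-forget: no handshake, no ACK

data, _src = receiver.recvfrom(1024)  # one call returns one whole datagram
print(data)
sender.close()
receiver.close()
```

On the loopback interface loss is unlikely, but over a real network the application would have to add its own retries, sequence numbers, and duplicate filtering to cope with the issues listed above.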
Inter-process communication (IPC) refers to the mechanisms and techniques used by operating systems to allow different processes to communicate with each other. There are several methods of IPC, each with its own advantages and disadvantages.
1. Pipes (same origin) – A pipe allows data to flow in one direction only, analogous to a simplex channel (like a keyboard). Output data is buffered until the input process reads it, and both processes must share a common origin.
2. Named Pipes (different processes) – A named pipe is a pipe with a specific name, so it can be used by processes that do not share a common origin. An example is a FIFO, where data written to the pipe is read back in first-in, first-out order.
5. Shared Memory – Shared memory allows processes to exchange data through a defined region of memory. A semaphore must typically be acquired before a process can access the shared memory.
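A pipe's one-way, buffered behavior can be shown with Python's os.pipe. As a simplifying assumption, the writer and reader here live in one process rather than in a parent and child sharing a common origin, which is where anonymous pipes are normally used.

```python
import os

# An anonymous pipe: a one-way channel from a write end to a read end.
# Normally the two ends are held by related processes (e.g. parent and
# child); here both ends are used in one process for brevity.
read_fd, write_fd = os.pipe()

os.write(write_fd, b"hello through the pipe")
os.close(write_fd)                 # closing the write end signals end-of-data

data = os.read(read_fd, 1024)      # data was buffered until the reader took it
os.close(read_fd)
print(data)                        # b'hello through the pipe'
```

The data sits in the kernel's pipe buffer between the write and the read, matching the buffering behavior described in item 1 above.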
Comparison between TCP (Transmission Control Protocol) and UDP (User Datagram Protocol):

Feature      | TCP                                                              | UDP
Connection   | Connection-oriented; establishes a connection before data transfer | Connectionless; no connection establishment before data transfer
Flow Control | Uses flow control to prevent overwhelming the receiver            | No flow control; the sender can send data at any rate
Use Case     | Querying databases, accessing web services, retrieving files      | Broadcasting updates, distributing notifications, real-time data sharing