Evolution of Distributed Computing Systems
Last Updated: 22 Nov, 2022
In this article, we will trace the history of distributed computing systems from the mainframe era to the present day. It is important to understand the history of anything in order to track how far we have progressed. Distributed computing is essentially a story of evolution from centralization to decentralization: it shows how centralized systems gradually gave way to decentralized ones. We had centralized systems such as mainframes as early as 1955, whereas today we increasingly rely on decentralized approaches such as edge computing and containers.
1. Mainframe: In the early years of computing, between 1960 and 1967, mainframe-based machines were considered the best solution for processing large-scale data, as they provided time-sharing to local clients interacting through teletype terminals. This type of system conceptualized the client-server architecture: the client connects to and sends requests to the server, and the server processes those requests, enabling a single time-sharing system to deliver multiple resources over a single medium to many clients. The major drawback was that mainframes were quite expensive, and this led to the innovation of early disk-based storage and transistor memory.
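To make the client-server idea concrete, here is a minimal sketch in Python. It is not how a real time-sharing mainframe was built; the local address, port, and "job" message are made up for illustration. One shared server processes requests arriving from client terminals over a single listening socket.

```python
# Minimal client-server sketch (hypothetical local address): one shared server
# processes requests arriving from client terminals over a single listening socket.
import socket
import socketserver
import threading

class RequestHandler(socketserver.StreamRequestHandler):
    def handle(self):
        job = self.rfile.readline().strip()          # read one request line
        self.wfile.write(b"processed: " + job + b"\n")

server = socketserver.ThreadingTCPServer(("127.0.0.1", 9000), RequestHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# A client connects, submits a job, and receives the server's response.
with socket.create_connection(("127.0.0.1", 9000)) as client:
    client.sendall(b"job from terminal 1\n")
    print(client.recv(1024).decode().strip())        # -> processed: job from terminal 1
```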
2. Cluster Networks: In the early 1970s, packet switching and cluster computing were developed and considered an alternative to mainframe systems, although clusters were still expensive. In cluster computing, the underlying hardware consists of a collection of similar workstations or PCs closely connected by a high-speed local-area network, with each node running the same operating system. Its purpose was to achieve parallelism. Between 1967 and 1974 we also saw the creation of ARPANET, an early network that enabled global message exchange, allowing services to be hosted on remote machines across geographic boundaries, independent of a fixed programming model. The TCP/IP protocol, which facilitated datagram- and stream-oriented communication over a packet-switched autonomous network of networks, also came into existence. Communication was mainly through datagram transport.
3. Internet & PCs: During this era, the evolution of the Internet took place. New technology such as TCP/IP had begun to transform the Internet into several connected networks, linking local networks to the wider Internet. The number of hosts connected to the network began to grow rapidly, so centralized naming systems such as HOSTS.TXT could not scale. Hence the Domain Name System (DNS) came into existence in 1985, translating hosts' domain names into IP addresses. Early GUI-based computers using WIMP (windows, icons, menus, pointers) interfaces were also developed, making computing feasible within the home and bringing applications such as video games and web browsing to consumers.
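The translation DNS performs, from a host name to IP addresses, can be seen with a short Python snippet using only the standard library; the domain below is just an example.

```python
# Resolve a domain name to IP addresses -- the lookup DNS performs for every host name.
import socket

domain = "example.com"   # example domain; replace with any host name
addresses = {info[4][0] for info in socket.getaddrinfo(domain, None)}
print(f"{domain} resolves to: {sorted(addresses)}")
```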
4. World Wide Web: During the 1980s and 1990s, the creation of HyperText Transfer Protocol (HTTP) and HyperText Markup Language (HTML) resulted in the first web browsers, websites, and web servers, developed by Tim Berners-Lee at CERN. Standardization of TCP/IP provided the infrastructure for an interconnected network of networks known as the World Wide Web (WWW), which led to tremendous growth in the number of hosts connected to the Internet. As the number of PC-based application programs running on independent machines grew, communication between such programs became extremely complex, making application-to-application interaction an increasing challenge. With the advent of network computing, remote procedure calls (RPCs) over TCP/IP became a widely accepted way for application software to communicate. In this era, servers provided resources identified by Uniform Resource Locators (URLs). Software applications running on a variety of hardware platforms, operating systems, and networks faced challenges when required to communicate with each other and share data. These demanding challenges led to the concept of distributed computing applications.
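As a rough illustration of RPC over TCP/IP, the sketch below uses Python's built-in XML-RPC modules. The port number and the add procedure are assumptions for the example; the point is that the client invokes the remote procedure as if it were a local function while the call actually travels over the network.

```python
# RPC sketch: the client calls add() as if it were local, but the call travels over TCP/IP.
import threading
from xmlrpc.server import SimpleXMLRPCServer
from xmlrpc.client import ServerProxy

def add(a, b):                            # procedure exposed by the server
    return a + b

server = SimpleXMLRPCServer(("127.0.0.1", 8800), logRequests=False, allow_none=True)
server.register_function(add, "add")
threading.Thread(target=server.serve_forever, daemon=True).start()

proxy = ServerProxy("http://127.0.0.1:8800/")
print(proxy.add(2, 3))                    # -> 5, computed remotely
```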
5. P2P, Grids & Web Services: Peer-to-peer (P2P) computing or networking is a distributed application architecture that partitions tasks or workloads between peers without the need for a central coordinator. Peers share equal privileges; in a P2P network, each node acts as both a client and a server. P2P file sharing was introduced in 1999 when American college student Shawn Fanning created the music-sharing service Napster. P2P networking enables a decentralized Internet. With the introduction of grid computing, multiple tasks can be completed by computers jointly connected over a network. It makes use of a data grid, i.e., a set of computers that can directly interact with each other to perform similar tasks by using middleware. Between 1994 and 2000 we also saw the creation of effective x86 virtualization. With the introduction of web services, platform-independent communication was established using XML-based information exchange systems that use the Internet for direct application-to-application interaction; through web services, Java can talk with Perl, and Windows applications can talk with Unix applications. Peer-to-peer networks are often created by collections of 12 or fewer machines. Each of these computers applies its own security to protect its data, but it also shares data with every other node. In peer-to-peer networks, the nodes both consume and produce resources, so as the number of nodes grows, the network's capacity for resource sharing grows with it. This is distinct from client-server networks, where adding nodes causes the server to become overloaded. It is challenging to give nodes in peer-to-peer networks proper security because they function as both clients and servers, which can open the door to denial-of-service attacks. The majority of contemporary operating systems, including Windows and macOS, come with software to implement peer-to-peer networking.
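A simplified Python sketch of a peer that plays both roles at once, serving data to other peers while also requesting data from them, might look like the following; the ports and payloads are invented for illustration.

```python
# P2P sketch: every peer runs a server loop and also issues client requests to other peers.
import socket
import threading
import time

def start_peer(my_port):
    def serve():                          # server role: answer requests from other peers
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
            srv.bind(("127.0.0.1", my_port))
            srv.listen()
            while True:
                conn, _ = srv.accept()
                with conn:
                    conn.sendall(f"shared file from peer {my_port}".encode())

    threading.Thread(target=serve, daemon=True).start()

    def fetch(peer_port):                 # client role: request data from another peer
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
            cli.connect(("127.0.0.1", peer_port))
            return cli.recv(1024).decode()

    return fetch

# Two peers on hypothetical local ports; each one both serves and consumes resources.
fetch_from_a = start_peer(9101)
fetch_from_b = start_peer(9102)
time.sleep(0.2)                           # give the server threads time to start listening
print(fetch_from_a(9102))                 # peer A downloads from peer B
print(fetch_from_b(9101))                 # peer B downloads from peer A
```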
6. Cloud, Mobile & IoT: Cloud computing emerged from the convergence of cluster technology, virtualization, and middleware. Through cloud computing, you can manage your resources and applications online over the Internet without having to host them on your own hard drive or server, and the major advantage is that they can be accessed by anyone from anywhere in the world. Many cloud providers offer subscription-based services: after paying for a subscription, customers can access all the computing resources they need. Customers no longer need to update outdated servers, buy hard drives when they run out of storage, install software updates, or buy software licenses; the vendor does all that for them. Mobile computing allows us to transmit data such as voice and video over a wireless network, so we no longer need to connect our mobile phones to switches. Some of the most common forms of mobile computing are smart cards, smartphones, and tablets. The Internet of Things (IoT) also began to emerge from mobile computing, using sensors, processing ability, software, and other technologies to connect devices and exchange data with other devices and systems over the Internet.
Application Programming Interface (API)-based communication over the REST model evolved to provide scalability, flexibility, portability, caching, and security. Instead of implementing these capabilities separately in each and every API, a common component was needed to apply these features on top of the API. This requirement led to the evolution of API management platforms, which today are one of the core features of any distributed system. Instead of treating one computer as a single machine, the idea of hosting multiple systems within one computer emerged. This led to virtual machines, where the same physical computer can act as multiple computers and run them all in parallel. Even though this was a good idea, it was not the best option in terms of resource utilization of the host computer. Popular virtualization platforms available today include VMware Workstation, Microsoft Hyper-V, and Oracle VM VirtualBox.
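At its core, a REST-style API interaction, the kind an API management platform sits in front of, is simply an HTTP request against a resource URL. The endpoint below is hypothetical; only the request pattern matters.

```python
# Minimal REST-style call: fetch a resource over HTTP and decode the JSON body.
import json
import urllib.request

url = "https://api.example.com/v1/devices/42"        # hypothetical REST endpoint
request = urllib.request.Request(url, headers={"Accept": "application/json"})
with urllib.request.urlopen(request, timeout=5) as response:
    device = json.load(response)                     # parsed representation of the resource
print(device)
```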
7. Fog and Edge Computing: As the data produced by mobile computing and IoT services began to grow tremendously, collecting and processing millions of data points in real time remained a challenge. This led to the concept of edge computing, in which client data is processed at the periphery of the network; it is largely a matter of location. Rather than moving data across a WAN such as the Internet to a centralized data center, which may cause latency issues, data is processed and analyzed closer to the point where it is created, such as on a corporate LAN. Fog computing greatly reduces the need for bandwidth by not sending every bit of information over cloud channels, instead aggregating it at certain access points. This type of distributed strategy lowers costs and improves efficiency. Companies like IBM are a driving force behind fog computing. Together, fog and edge computing extend the cloud computing model away from centralized stakeholders towards decentralized, multi-stakeholder systems capable of providing ultra-low service response times and increased aggregate bandwidth.
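The bandwidth saving behind fog and edge computing comes from summarizing raw readings near their source and forwarding only the aggregates. The sketch below is a toy example with made-up sensor values and a placeholder upload step standing in for the real WAN transfer.

```python
# Fog/edge sketch: aggregate raw sensor readings locally, send only summaries upstream.
from statistics import mean

def aggregate_at_edge(readings, window=10):
    """Collapse each window of raw readings into one summary record."""
    summaries = []
    for start in range(0, len(readings), window):
        values = readings[start:start + window]
        summaries.append({
            "count": len(values),
            "min": min(values),
            "max": max(values),
            "avg": round(mean(values), 2),
        })
    return summaries

def send_to_cloud(summaries):             # placeholder for the real upload over the WAN
    print(f"uploading {len(summaries)} summaries instead of the raw stream")

raw_stream = [20.1, 20.3, 20.2, 20.5, 20.4, 20.6, 20.8, 20.7, 20.9, 21.0,
              21.2, 21.1, 21.3, 21.5, 21.4, 21.6, 21.8, 21.7, 21.9, 22.0]
send_to_cloud(aggregate_at_edge(raw_stream))
```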
The idea of using containers became prominent because you can package your application and all its relevant dependencies into a container image that can run in any environment whose host operating system supports containers. The concept became more popular and matured considerably with the introduction of container-based application deployment. Containers behave much like virtual machines but without the overhead of a separate operating system. Docker and Kubernetes are the two most popular container platforms: Docker is used to build and run container images, while Kubernetes orchestrates containers across large clusters and manages communication between the services running in them.
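As a small sketch (assuming Docker is installed locally and the public alpine image can be pulled), running a command inside a container image from Python looks like this; the echoed message is arbitrary.

```python
# Container sketch: run a command inside a container image (assumes Docker is installed).
import subprocess

result = subprocess.run(
    ["docker", "run", "--rm", "alpine", "echo", "hello from a container"],
    capture_output=True, text=True, check=True,
)
print(result.stdout.strip())              # output produced inside the container
```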
Today, distributed systems are programmed by application developers, while the underlying infrastructure is managed by a cloud provider. This is the current state of distributed computing, and it keeps on evolving.