
BNCT 312 PARALLEL & DISTRIBUTED SYSTEM
COLLEGE OF COMPUTING & INFORMATICS
DEPARTMENT OF SYSTEM ENGINEERING


Introduction to Parallel Computing
Parallel computing is a type of computation in which many calculations, or the execution of processes, are carried out simultaneously.

Large problems can often be divided into smaller ones, which can then be solved at the same time.

In simple terms, parallel computing is breaking up a task into smaller pieces and executing those pieces at the same time, each on its own processor or on a set of networked computers.
Example of Parallel Computing

Let's look at a simple example. Say we have the following equation:

• Y = (4 x 5) + (1 x 6) + (5 x 3)
On a single processor, the steps needed to calculate a value for Y might look like:
• Step 1: Y = (4 x 5) + (1 x 6) + (5 x 3)
• Step 2: Y = 20 + 6 + (5 x 3)
• Step 3: Y = 20 + 6 + 15
• Step 4: Y = 41
But in a parallel computing scenario, with three processors or computers, each product is computed at the same time, so the steps look something like:
• Step 1: Y = 20 + 6 + 15
• Step 2: Y = 41
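As an illustration (not part of the original slides), here is a minimal Python sketch of the same computation using the standard multiprocessing module, so that the three products are computed by separate worker processes:

```python
# A minimal sketch: the three products are computed in parallel,
# then summed, mirroring the two-step version above.
from multiprocessing import Pool

def multiply(pair):
    # One product term, e.g. (4, 5) -> 20
    a, b = pair
    return a * b

if __name__ == "__main__":
    terms = [(4, 5), (1, 6), (5, 3)]
    with Pool(processes=3) as pool:           # one worker per term
        products = pool.map(multiply, terms)  # "Step 1": [20, 6, 15]
    y = sum(products)                         # "Step 2": 41
    print("Y =", y)
```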
Distributed Systems
• A distributed system is a system whose components are located on different networked computers, which communicate and coordinate their actions by passing messages to one another. The components interact with one another in order to achieve a common goal.
• A distributed system contains multiple nodes that are physically separate but linked together by a network. All the nodes in the system communicate with each other, and each node runs a small part of the distributed operating system software.
[Diagram: a distributed system, with physically separate nodes linked by a network]
Difference between Parallel & Distributed
Systems
Parallel Systems Vs Distributed System
Memory: Tightly coupled system shared Loosely coupled system

Control: There is a global clock control There is no global clock control

Processors
Interconnection: The order is tbs The order is gbs
Main focus: main focus is on performance and Main focus is on performance
scientific computing of cost and scalability,
reliability and resource sharing
Types of Distributed Systems
The nodes in a distributed system can be arranged in the form of client/server systems or peer-to-peer systems.
a. Client/Server Systems
In client/server systems, the client requests a resource and the server provides that resource. A server may serve multiple clients at the same time, while a client is in contact with only one server. The client and server usually communicate via a computer network, and so they form part of a distributed system.
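To make the client/server idea concrete, here is a minimal illustrative sketch using Python's standard socket module; the port number and message contents are arbitrary choices for the example, not part of the lecture material:

```python
# Illustrative client/server sketch: the client requests a resource
# over the network and the server provides it.
import socket
import threading
import time

def server():
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.bind(("localhost", 5000))
        s.listen()
        conn, _ = s.accept()              # wait for a client
        with conn:
            request = conn.recv(1024)     # the client's request
            conn.sendall(b"resource for " + request)

threading.Thread(target=server, daemon=True).start()
time.sleep(0.5)                           # give the server time to start

# The client requests a resource; the server provides it.
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as c:
    c.connect(("localhost", 5000))
    c.sendall(b"report.txt")
    print(c.recv(1024))                   # b'resource for report.txt'
```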
b. Peer-to-Peer Systems
Peer-to-peer systems contain nodes that are equal participants in data sharing. All tasks are divided equally between the nodes, which interact with each other as required and share resources. This is done with the help of a network.
Characteristics of Distributed Systems
a. Resource Sharing
Computers in a distributed system share resources such as hardware (disks and printers), software (files, windows and data objects) and data.
b. Heterogeneity
In distributed systems, components can vary and differ in their networks, computer hardware, operating systems, programming languages and implementations by different developers.
c. Transparency
Communication between components is hidden from users, so the system appears to them as a single whole.
d. Scalability
The system remains effective when there is a significant increase in the number of users and the number of resources.
e. Concurrency
Concurrency is a property of a system representing the fact that multiple activities are executed at the same time.
Parallel Architectures
Parallel architectures are a sub-class of distributed computing in which the processes all work to solve the same problem.

A parallel computer is a collection of processing elements that cooperate to solve large problems fast.

Parallel computer architecture adds a new dimension to the development of computer systems by using more and more processors. In principle, the performance achieved by utilizing a large number of processors is higher than the performance of a single processor at a given point in time.
Types of Parallel Architectures
1. Multiprocessors and Multicomputers
• Multiprocessors
• Multicomputers
2. Shared-Memory Multiprocessors: Uniform Memory Access (UMA)
In this model, all the processors share the physical memory uniformly: every processor has equal access time to every memory word. Each processor may have a private cache memory, and the same rule is followed for peripheral devices.
When all the processors have equal access to all the peripheral devices, the system is called a symmetric multiprocessor. When only one or a few processors can access the peripheral devices, the system is called an asymmetric multiprocessor.
Non-Uniform Memory Access (NUMA)
In the NUMA multiprocessor model, the access time varies with the location of the memory word. Here, the shared memory is physically distributed among all the processors as local memories. The collection of all the local memories forms a global address space which can be accessed by all the processors.
Cache-Only Memory Architecture (COMA)
The COMA model is a special case of the NUMA model. Here, all the distributed main memories are converted to cache memories.
Performance of Parallel Computers
Parallel computing is breaking up a task into smaller pieces and executing those pieces at the same time, each on its own processor or computer. An increase in speed is the main performance characteristic.
Performance Metrics for Processors
Computer performance metrics (things to measure) include availability, response time, channel capacity, latency, completion time, service time, bandwidth, throughput, relative efficiency, scalability, performance per watt, compression ratio, instruction path length and speedup. CPU benchmarks are also available.
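Of the metrics listed, speedup and relative efficiency are simple enough to sketch: speedup compares the time a task takes on one processor with the time it takes on p processors. The timings below are made-up placeholders, not real measurements:

```python
# Sketch of the speedup and relative efficiency metrics listed above.
def speedup(t_one_processor, t_p_processors):
    # Speedup = T(1) / T(p)
    return t_one_processor / t_p_processors

def efficiency(t_one_processor, t_p_processors, p):
    # Efficiency = speedup / p; 1.0 would be a perfect linear speedup
    return speedup(t_one_processor, t_p_processors) / p

print(speedup(40.0, 12.5))        # 3.2x faster
print(efficiency(40.0, 12.5, 4))  # 0.8, i.e. 80% of ideal on 4 processors
```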
How CPU Speed Is Measured
The most common measure of CPU speed is the clock speed, which is measured in MHz or GHz. One GHz equals 1,000 MHz, so a speed of 2.4 GHz could also be expressed as 2,400 MHz. The higher the clock speed, the more operations the CPU can execute per second.
Parallel Computing Models
A parallel programming model is a set of software technologies used to express parallel algorithms and match applications with the underlying parallel systems. It encompasses the areas of applications, languages, compilers, libraries, communication systems, and parallel I/O.
Types of Parallel Computing Models
1. Shared Memory model
2. Message Passing model
3. Threads model
4. Data Parallel model
1. Shared Memory Model
In this type, the programmer views the program as a collection of processes which use common or shared variables.
A processor may not have a private program or data memory; a common program and data are stored in the main memory and accessed by all processors.
Each processor is assigned a different part of the program and the data. The main program creates a separate process for each processor. Each process is allocated to a processor along with the required data, and these processes are executed independently on different processors. After execution, all the processes rejoin the main program.
[Diagram: Shared Memory model]
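A minimal illustrative sketch of this model in Python: worker processes are each assigned a different part of one shared array (the array contents and worker count are arbitrary):

```python
# Sketch of the shared memory model: each worker process is assigned
# a different part of one shared array.
from multiprocessing import Array, Process

def square_slice(shared, start, stop):
    # This worker's assigned part of the program and data
    for i in range(start, stop):
        shared[i] = shared[i] * shared[i]

if __name__ == "__main__":
    data = Array("i", range(8))   # shared variable in common memory
    workers = [Process(target=square_slice, args=(data, 0, 4)),
               Process(target=square_slice, args=(data, 4, 8))]
    for w in workers:
        w.start()
    for w in workers:             # the processes rejoin the main program
        w.join()
    print(list(data))             # [0, 1, 4, 9, 16, 25, 36, 49]
```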
2. Message Passing Model
In this type, different processes may be present on a single processor or on multiple processors, and every process has its own set of data.
• Data transfer between the processes is achieved by send and receive operations, which require cooperation between the processes involved.
• There must be a receive operation for every send operation.
[Diagram: Message Passing model]
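An illustrative message passing sketch in Python, where a pipe carries the messages and every send is matched by a receive:

```python
# Sketch of the message passing model: the processes exchange data
# only through explicit, matching send/receive pairs over a pipe.
from multiprocessing import Pipe, Process

def worker(conn):
    data = conn.recv()        # receive matching the parent's send
    conn.send(sum(data))      # send matching the parent's receive
    conn.close()

if __name__ == "__main__":
    parent_end, child_end = Pipe()
    p = Process(target=worker, args=(child_end,))
    p.start()
    parent_end.send([1, 2, 3, 4])   # each process keeps its own data
    print(parent_end.recv())        # 10
    p.join()
```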
3. Threads Model
• A thread is a short sequence of instructions within a process. Different threads can be executed on the same processor or on different processors.
• If the threads are executed on the same processor, the processor switches between the threads in a random fashion.
• If the threads are executed on different processors, they are executed simultaneously.
• The threads communicate through global memory.
[Diagram: Threads model]
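An illustrative threads sketch in Python: two threads inside one process each compute part of a sum and communicate the results through shared (global) memory:

```python
# Sketch of the threads model: two threads within one process
# communicate through global memory (the `results` list).
import threading

results = [0, 0]                  # global memory shared by the threads

def partial_sum(index, numbers):
    results[index] = sum(numbers)

t1 = threading.Thread(target=partial_sum, args=(0, range(0, 50)))
t2 = threading.Thread(target=partial_sum, args=(1, range(50, 100)))
t1.start(); t2.start()
t1.join(); t2.join()              # wait for both threads to finish
print(sum(results))               # 4950, the sum of 0..99
```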
4. Data Parallel Model
• Data parallelism is one of the simplest forms of parallelism. Here, the data set is organized into a common structure, such as an array.
• Many programs apply the same operation on different parts of the common structure.
• Suppose the task is to add two arrays of 100 elements and store the result in another array. If there are four processors, then each processor can do 25 of the additions, as sketched below.
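A Python sketch of that example, with made-up array contents: four workers each perform 25 of the 100 additions.

```python
# Sketch of the slide's data parallel example: adding two 100-element
# arrays with four workers, each doing 25 of the additions.
from multiprocessing import Pool

A = list(range(100))              # arbitrary example data
B = list(range(100))

def add_chunk(bounds):
    # The same operation (addition) applied to one part of the data
    start, stop = bounds
    return [A[i] + B[i] for i in range(start, stop)]

if __name__ == "__main__":
    chunks = [(0, 25), (25, 50), (50, 75), (75, 100)]
    with Pool(processes=4) as pool:
        parts = pool.map(add_chunk, chunks)
    C = [x for part in parts for x in part]   # reassemble the result array
    print(C[0], C[50], C[99])                 # 0 100 198
```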
