BCA, BSC 3rd Semester Computer Architecture

Parallel processing refers to techniques that enable simultaneous data processing to enhance computational speed. Based on Flynn's classification, computers can be divided into four categories: SISD, SIMD, MISD, and MIMD, with MIMD further divided into shared-memory and distributed-memory models. Each classification has distinct characteristics and operational methods, impacting the performance and scalability of computing systems.


Parallel processing:

1. Parallel processing is a term used for a large class of techniques that provide simultaneous data-processing tasks for the purpose of increasing the computational speed of a computer.
2. A parallel processing system is able to perform concurrent data processing to achieve faster execution time.
3. Parallel processing at a higher level of complexity can be achieved by having a multiplicity of functional units that perform operations simultaneously.
Example: arithmetic, logic, and shift operations.
While one instruction is being executed in the ALU, the next instruction can be read from memory. In such a design, the execution unit is separated into eight functional units operating in parallel:
●	The operation specified by each instruction is associated with its operands and routed to the appropriate unit.
●	The adder and integer multiplier perform the arithmetic operations on integer numbers.
●	The floating-point operations are separated into three units that operate in parallel.
●	The logic, shift, and increment operations can be performed concurrently on different data. All units are independent of each other.
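As an illustration of the idea, the concurrent operation of independent functional units can be sketched in Python, with threads standing in for the adder, multiplier, logic, and shift units (the unit names here are illustrative, not part of any real processor):

```python
from concurrent.futures import ThreadPoolExecutor

# Each function stands in for an independent functional unit.
def add(a, b):          return a + b    # adder
def multiply(a, b):     return a * b    # integer multiplier
def logical_and(a, b):  return a & b    # logic unit
def shift_left(a, n):   return a << n   # shift unit

# Different operations proceed concurrently on different data,
# just as independent functional units would.
with ThreadPoolExecutor(max_workers=4) as pool:
    futures = [
        pool.submit(add, 3, 4),
        pool.submit(multiply, 3, 4),
        pool.submit(logical_and, 0b1100, 0b1010),
        pool.submit(shift_left, 1, 3),
    ]
    results = [f.result() for f in futures]

print(results)  # [7, 12, 8, 8]
```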

Parallel processing can be classified in several ways. One widely used classification, introduced by M. J. Flynn, is based on the number of concurrent instruction streams and data streams.

Flynn's classification divides computers into four major groups:
Single-Instruction, Single-Data (SISD) Systems

An SISD computing system is a uniprocessor machine which is capable of executing a single instruction, operating on a single data stream. In SISD, machine instructions are processed in a sequential manner, and computers adopting this model are popularly called sequential computers. Most conventional computers have SISD architecture. All the instructions and data to be processed have to be stored in primary memory.
The speed of the processing element in the SISD model is limited by the rate at which the computer can transfer information internally.
Dominant representatives: IBM PCs and workstations.
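The sequential nature of SISD execution can be sketched as a single loop in Python: one instruction stream applied to one data stream, with no overlap between steps:

```python
# SISD: one instruction stream applied sequentially to one data stream.
data = [5, 3, 8, 1]

# Each step executes one after another, never overlapping,
# as in a uniprocessor machine.
total = 0
for x in data:          # single data stream
    total = total + x   # one instruction at a time

print(total)  # 17
```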

Single-Instruction, Multiple-Data (SIMD) Systems

An SIMD computing system is a multiprocessor machine capable of executing the same instruction on all processing elements (PEs), with each PE operating on a different data stream. This model suits problems with a high degree of data parallelism, such as vector and matrix computation.
Example: array processors and modern GPUs.

Multiple-Instruction, Single-Data (MISD) Systems

An MISD computing system is a multiprocessor machine capable of executing different instructions on different PEs, but all of them operating on the same dataset.
Example: Z = sin(x) + cos(x) + tan(x). The system performs different operations on the same data (x).
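This Z = sin(x) + cos(x) + tan(x) example can be sketched in Python, with a thread pool standing in for the PEs; each "PE" applies a different operation to the same x:

```python
import math
from concurrent.futures import ThreadPoolExecutor

x = 0.5  # the single shared data stream

# Three "processing elements", each executing a different
# instruction on the same datum x.
operations = [math.sin, math.cos, math.tan]

with ThreadPoolExecutor(max_workers=3) as pool:
    partials = list(pool.map(lambda op: op(x), operations))

Z = sum(partials)
print(Z)
```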

Multiple-Instruction, Multiple-Data (MIMD) Systems

An MIMD computing system is a multiprocessor machine capable of executing different instructions on different PEs, with each PE operating on a different data stream. Unlike SISD and MISD machines, PEs in MIMD machines work asynchronously.


MIMD machines are classified into shared-memory and distributed-memory
models depending on how processors connect to memory.

Shared-memory MIMD system:

(Tightly coupled) All processors use the same global memory, and communication
happens through it. Any change made by one processor is visible to all others.

Shared-memory systems are easier to program but harder to scale and more
vulnerable to failures, since a fault can affect the whole system.

Examples: Silicon Graphics machines and Sun/IBM SMP systems.
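A minimal Python sketch of the shared-memory idea, using threads as the "processors" and an ordinary dictionary as the global memory (the lock is needed because the update is not atomic):

```python
import threading

# Global memory shared by all "processors" (threads here).
shared = {"counter": 0}
lock = threading.Lock()

def processor(increments):
    # A change made by one processor is visible to all others,
    # because they all address the same global memory.
    for _ in range(increments):
        with lock:
            shared["counter"] += 1

threads = [threading.Thread(target=processor, args=(1000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(shared["counter"])  # 4000: every update landed in the one shared memory
```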

Distributed-memory MIMD system:

(Loosely coupled) Each processor has its own local memory, and processors
communicate through an interconnection network (such as a tree or mesh).
Distributed-memory systems are more scalable and fault-tolerant, since each
processor is independent. For real-world use, distributed-memory MIMD is
therefore generally considered the more practical architecture.
