
Parallel Algorithm Design Principles and Programming
UNIT-II
Need for communication and coordination/synchronization
• In parallel algorithm design and programming, communication, coordination, and
synchronization are essential for ensuring that multiple processes or threads work
together efficiently to solve a problem.
• Communication: The exchange of data or messages between parallel processes or threads.

Why it's needed:


• Data sharing: Parallel processes often work on different parts of the data but may need to
share intermediate results.
• Task dependency: A process may require the output of another process as input for its
computation.
• Scalability: Efficient communication mechanisms allow the system to scale across multiple
cores, nodes, or clusters.
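As a minimal sketch of such communication (the slides do not fix a language, so Python's multiprocessing module is assumed here purely for illustration), two worker processes each compute a partial sum over their share of the data and send the intermediate result back to the parent through a queue:

```python
from multiprocessing import Process, Queue

def partial_sum(chunk, out):
    # Each worker computes on its own slice of the data ...
    out.put(sum(chunk))  # ... and communicates the intermediate result back.

if __name__ == "__main__":
    data = list(range(1_000_000))
    q = Queue()
    mid = len(data) // 2
    workers = [Process(target=partial_sum, args=(data[:mid], q)),
               Process(target=partial_sum, args=(data[mid:], q))]
    for w in workers:
        w.start()
    total = q.get() + q.get()   # receive the two partial results
    for w in workers:
        w.join()
    print(total == sum(data))   # True
```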
Need for communication and coordination/synchronization
• Coordination: The management of task execution order among parallel processes
to achieve the overall program goal.
Why it's needed:
• Load balancing: Ensuring all processes have equal or proportional workloads.
• Avoiding deadlocks: Preventing situations where processes are waiting
indefinitely for each other.
• Task assignment: Allocating tasks to processes dynamically or statically.
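A hedged sketch of dynamic task assignment for load balancing, again assuming Python's multiprocessing: tasks of uneven size sit in a shared queue, and each worker pulls the next task as soon as it is free, so no worker idles while another is overloaded.

```python
from multiprocessing import Process, JoinableQueue

def worker(tasks):
    # Dynamic assignment: each worker repeatedly pulls the next available task.
    while True:
        n = tasks.get()
        if n is None:                          # sentinel: no more work
            tasks.task_done()
            break
        _ = sum(i * i for i in range(n))       # simulated, deliberately uneven workload
        tasks.task_done()

if __name__ == "__main__":
    tasks = JoinableQueue()
    for size in [10_000, 2_000_000, 50_000, 1_500_000, 100]:   # uneven task sizes
        tasks.put(size)
    procs = [Process(target=worker, args=(tasks,)) for _ in range(3)]
    for _ in procs:
        tasks.put(None)                        # one sentinel per worker
    for p in procs:
        p.start()
    tasks.join()                               # wait until every task is marked done
    for p in procs:
        p.join()
```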
Need for communication and coordination/synchronization
• Synchronization: Mechanisms to ensure that processes access shared resources
(e.g., memory, files) in a controlled manner.

Why it's needed:


• Avoiding race conditions: Ensuring correctness when multiple processes try to
update shared data.
• Ensuring consistency: Maintaining data integrity when multiple
threads/processes interact.
• Task dependencies: Ensuring one process doesn't proceed until another finishes
its work.
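A minimal sketch of the race-condition problem and its fix, assuming Python's threading module for illustration: the increments are read-modify-write operations on shared data, and the lock forces the critical section to execute one thread at a time.

```python
import threading

counter = 0
lock = threading.Lock()

def increment(times):
    global counter
    for _ in range(times):
        with lock:          # critical section: only one thread updates at a time
            counter += 1    # read-modify-write on shared data

threads = [threading.Thread(target=increment, args=(100_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)              # always 400000 with the lock; without it, updates can be lost
```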
Example
• Parallel Sorting (e.g., Merge Sort):
• Communication: Each thread works on a sub-array and communicates results to a
master thread for merging.
• Coordination: Threads are assigned sub-arrays dynamically to balance workload.
• Synchronization: Merging results requires synchronization to avoid race
conditions when writing to the final array.
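A hedged sketch of this merge-sort example, assuming a multiprocessing.Pool: communication happens when the sorted sub-arrays are returned to the master process, coordination comes from the pool handing sub-arrays to workers, and synchronization is implicit because the merge only starts after map() has delivered every result.

```python
from multiprocessing import Pool
from heapq import merge
import random

def sort_chunk(chunk):
    # Each worker sorts its own sub-array independently.
    return sorted(chunk)

if __name__ == "__main__":
    data = [random.randint(0, 10_000) for _ in range(100_000)]
    n_workers = 4
    step = (len(data) + n_workers - 1) // n_workers
    chunks = [data[i:i + step] for i in range(0, len(data), step)]

    with Pool(n_workers) as pool:
        sorted_chunks = pool.map(sort_chunk, chunks)   # results communicated back to the master

    # The master merges only after all workers have finished (implicit synchronization).
    result = list(merge(*sorted_chunks))
    print(result == sorted(data))   # True
```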
Scheduling and Contention

• In parallel algorithm design, scheduling and contention concern how tasks are distributed among processors and how the conflicts that arise over shared resources are managed:
Scheduling
• In parallel computing, scheduling is the process of distributing tasks among multiple processors to maximize performance. The goal is to keep every processor usefully busy while reducing communication costs by assigning each task to a suitable processor.
Contention
• Contention management is the process of resolving conflicts that arise when transactions or threads collide over the same shared resource. The goal is to ensure that conflicting transactions are executed in a serialized manner and to dynamically adjust the level of parallelism between threads.
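One common contention-management strategy can be sketched as follows (a simplified illustration in Python threads, not a full transactional-memory scheme): each thread tries to acquire the lock without blocking, and on a collision it backs off for a short random interval before retrying, so conflicting updates end up running serialized.

```python
import threading, time, random

shared = []
lock = threading.Lock()

def append_with_backoff(item):
    # Try-lock with randomized backoff: a simple contention-management policy.
    while not lock.acquire(blocking=False):        # collision detected
        time.sleep(random.uniform(0.001, 0.005))   # back off, then retry
    try:
        shared.append(item)                        # conflicting updates run serialized
    finally:
        lock.release()

threads = [threading.Thread(target=append_with_backoff, args=(i,)) for i in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(sorted(shared))   # all eight items appear exactly once
```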
Task mapping
• An appropriate mapping of tasks to processes is critical to an algorithm's performance. Task
dependency graphs and task interaction graphs can help determine the mapping.
Here are some tips for mapping tasks to processes:
• Map independent tasks to different processes
• Assign tasks on the critical path to processes as soon as they become available
• Minimize interaction between processes by mapping tasks with dense interactions to the same process
Independence and Partitioning
• In parallel algorithm design, independence and partitioning are two fundamental
principles that help in breaking down a problem into smaller parts that can be solved
concurrently.
Independence
• Independence refers to the degree to which different tasks or operations in a problem
can be executed without interfering with each other.
• Key Concepts:
• Data Independence: Ensures that tasks operate on disjoint data sets, reducing the need for
synchronization.
• Task Independence: Focuses on the logical independence of operations, meaning tasks do not
depend on intermediate results from other tasks.
• Advantages of Independence:
• Minimizes communication and synchronization overhead.
Partitioning
• Partitioning is the process of dividing a problem into smaller, more manageable sub-
problems that can be solved in parallel.
• Types of Partitioning:
• Data Partitioning: Dividing the data among multiple processors or threads.
• Example: Splitting an array into chunks for parallel processing.
• Task Partitioning: Dividing the operations or computations into independent tasks.
• Example: Assigning different operations (e.g., filtering, sorting) to different processors.
• Key Goals:
• Balance the workload among processors (load balancing).
• Minimize inter-process communication to reduce overhead.
• Techniques:
• Static Partitioning: Tasks are assigned before execution and remain fixed.
• Dynamic Partitioning: Tasks are assigned at runtime based on availability or workload.
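The two techniques can be contrasted in a small sketch, again assuming Python's multiprocessing.Pool: static partitioning fixes the split into equal blocks before execution, while the dynamic variant hands out small chunks at runtime so a free worker immediately picks up the next one.

```python
from multiprocessing import Pool
import os

def square(x):
    return x * x

def square_block(block):
    # Static partitioning: a whole pre-cut block is one task.
    return [x * x for x in block]

if __name__ == "__main__":
    data = list(range(1_000))
    n = os.cpu_count() or 2

    with Pool(n) as pool:
        # Static: the split into n equal blocks is fixed before any work starts.
        step = (len(data) + n - 1) // n
        blocks = [data[i:i + step] for i in range(0, len(data), step)]
        static = [y for blk in pool.map(square_block, blocks) for y in blk]

        # Dynamic: small chunks are assigned at runtime as workers become free,
        # which balances better when individual items take uneven time.
        dynamic = pool.map(square, data, chunksize=10)

    print(static == dynamic)   # True: same result, different distribution of the work
```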
• Example:
• MapReduce Framework:
• The Map phase partitions the input data into key-value pairs.
• The Reduce phase then aggregates all values that share the same key; both phases can run in parallel across partitions.
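A tiny word-count sketch in the MapReduce style rounds out the example; the names map_phase and reduce_phase are illustrative only and not the API of any particular framework.

```python
from collections import defaultdict
from itertools import chain

def map_phase(document):
    # Map: turn each input partition into a list of (key, value) pairs.
    return [(word, 1) for word in document.split()]

def reduce_phase(pairs):
    # Reduce: aggregate all values that share the same key.
    counts = defaultdict(int)
    for word, n in pairs:
        counts[word] += n
    return dict(counts)

documents = ["the quick brown fox", "the lazy dog", "the quick dog"]
# Each document is an independent partition, so the map calls could run in parallel.
mapped = chain.from_iterable(map_phase(d) for d in documents)
print(reduce_phase(mapped))   # {'the': 3, 'quick': 2, 'brown': 1, ...}
```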
