Parallel Algorithm Design Principles and Programming (2)
• In parallel algorithm design, scheduling and contention concern how tasks are distributed among
processors and how the conflicts that arise between them are managed:
Scheduling
• In parallel computing, scheduling is the process of distributing tasks among multiple processors to
maximize performance. The goal is to balance the load and reduce communication costs by assigning
each task to an appropriate processor.
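As a rough illustration, the sketch below implements a greedy list scheduler in Python: tasks with larger estimated costs are placed first, each on the currently least-loaded processor. The task costs and processor count are made-up values for the example, not from the original slides.

```python
import heapq

def greedy_schedule(task_costs, num_procs):
    """Greedy list scheduling: longest tasks first, each assigned to the
    currently least-loaded processor."""
    heap = [(0, p) for p in range(num_procs)]   # (current load, processor id)
    assignment = [[] for _ in range(num_procs)]
    for task, cost in sorted(enumerate(task_costs), key=lambda t: -t[1]):
        load, proc = heapq.heappop(heap)        # least-loaded processor
        assignment[proc].append(task)
        heapq.heappush(heap, (load + cost, proc))
    return assignment

if __name__ == "__main__":
    costs = [4, 2, 7, 1, 3, 5]                  # hypothetical task costs
    print(greedy_schedule(costs, num_procs=2))  # [[2, 4, 3], [5, 0, 1]]
```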
Contention
• Contention management is the process of resolving conflicts that arise when concurrent transactions
collide on shared data. The goal is to execute conflicting transactions in a serialized manner and to
dynamically adjust the degree of parallelism between threads.
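A minimal sketch of the idea in Python: a lock serializes conflicting updates to a shared counter, so colliding increments execute one at a time. The shared counter and thread count are illustrative, not from the slides.

```python
import threading

counter = 0
lock = threading.Lock()

def increment(n):
    global counter
    for _ in range(n):
        # The lock serializes conflicting updates to the shared counter;
        # without it, concurrent read-modify-write cycles could collide
        # and lose increments.
        with lock:
            counter += 1

threads = [threading.Thread(target=increment, args=(100_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 400000 with the lock; may be lower without it
```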
Task Mapping
• An appropriate mapping of tasks to processes is critical to an algorithm's performance. Task
dependency graphs and task interaction graphs can help determine the mapping.
Here are some tips for mapping tasks to processes (a small heuristic sketch follows this list):
• Map independent tasks to different processes
• Assign tasks on the critical path to processes as soon as they become available
• Minimize interaction between processes by mapping tasks with dense interactions to the same process
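The Python sketch below follows the last two tips: each task is placed on the process holding its heaviest interaction partners, with a load penalty so that independent tasks spread across processes. The interaction weights are hypothetical inputs invented for the example.

```python
def map_tasks(interactions, num_procs):
    """Greedy heuristic: prefer the process holding a task's heavy
    interaction partners, penalized by that process's current load."""
    placement = {}
    load = [0] * num_procs
    for task in sorted({t for pair in interactions for t in pair}):
        # Start from a load penalty, then add affinity to placed partners
        scores = [-load[p] for p in range(num_procs)]
        for (a, b), w in interactions.items():
            other = b if a == task else a if b == task else None
            if other is not None and other in placement:
                scores[placement[other]] += w
        proc = scores.index(max(scores))
        placement[task] = proc
        load[proc] += 1
    return placement

if __name__ == "__main__":
    g = {(0, 1): 5, (2, 3): 4, (1, 2): 1}  # hypothetical interaction weights
    print(map_tasks(g, num_procs=2))       # {0: 0, 1: 0, 2: 1, 3: 1}
```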
Independence and Partitioning
• In parallel algorithm design, independence and partitioning are two fundamental
principles that help break a problem down into smaller parts that can be solved
concurrently.
Independence
• Independence refers to the degree to which different tasks or operations in a problem
can be executed without interfering with each other.
• Key Concepts:
• Data Independence: Ensures that tasks operate on disjoint data sets, reducing the need for
synchronization.
• Task Independence: Focuses on the logical independence of operations, meaning tasks do not
depend on intermediate results from other tasks.
• Advantages of Independence:
• Minimizes communication and synchronization overhead, as the sketch below illustrates.
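In the Python sketch below, each worker squares its own disjoint slice of an array; because the tasks are data-independent, no locks or other synchronization are needed. The array and worker count are made up for illustration.

```python
from concurrent.futures import ThreadPoolExecutor

def square_chunk(chunk):
    # Data independence: each task reads and writes only its own chunk
    return [x * x for x in chunk]

data = list(range(16))
chunks = [data[i::4] for i in range(4)]  # 4 disjoint slices

with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(square_chunk, chunks))
print(results)
```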
Partitioning
• Partitioning is the process of dividing a problem into smaller, more manageable sub-
problems that can be solved in parallel.
• Types of Partitioning (illustrated in the sketch after this list):
• Data Partitioning: Dividing the data among multiple processors or threads.
• Example: Splitting an array into chunks for parallel processing.
• Task Partitioning: Dividing the operations or computations into independent tasks.
• Example: Assigning different operations (e.g., filtering, sorting) to different processors.
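The two types can be contrasted in a short Python sketch (the data and operations are illustrative): data partitioning splits one array into chunks that all receive the same operation, while task partitioning submits different operations as independent tasks.

```python
from concurrent.futures import ThreadPoolExecutor

data = list(range(1, 101))

# Data partitioning: split the array into chunks, same operation on each
chunks = [data[i:i + 25] for i in range(0, len(data), 25)]
with ThreadPoolExecutor() as pool:
    partial_sums = list(pool.map(sum, chunks))
print(sum(partial_sums))                   # 5050

# Task partitioning: different operations run as independent tasks
with ThreadPoolExecutor() as pool:
    f_min, f_max = pool.submit(min, data), pool.submit(max, data)
    print(f_min.result(), f_max.result())  # 1 100
```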
• Key Goals:
• Balance the workload among processors (load balancing).
• Minimize inter-process communication to reduce overhead.
• Techniques (contrasted in the sketch after this list):
• Static Partitioning: Tasks are assigned before execution and remain fixed.
• Dynamic Partitioning: Tasks are assigned at runtime based on availability or workload.
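The difference can be sketched with a shared work queue (the task list and worker count are made up): static partitioning fixes each worker's share before execution, while dynamic partitioning lets idle workers pull the next task at runtime.

```python
import queue
import threading

tasks = list(range(8))

# Static partitioning: shares are fixed before execution and never change
static_shares = [tasks[0::2], tasks[1::2]]  # worker 0 and worker 1

# Dynamic partitioning: workers pull tasks from a shared queue at runtime
work = queue.Queue()
for t in tasks:
    work.put(t)

def worker(name, log):
    while True:
        try:
            t = work.get_nowait()
        except queue.Empty:
            break                  # queue drained: this worker stops
        log.append((name, t))      # record which worker ran which task

log = []
threads = [threading.Thread(target=worker, args=(i, log)) for i in range(2)]
for th in threads:
    th.start()
for th in threads:
    th.join()
print(static_shares)               # fixed assignment
print(log)                         # runtime assignment (varies per run)
```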
• Example:
• MapReduce Framework:
• The Map phase partitions the input data into key-value pairs; the Reduce phase then aggregates the
values associated with each key in parallel. A toy word-count sketch follows.
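To make the flow concrete, here is a word count in the MapReduce style written in plain Python (not the Hadoop or any real MapReduce API; the documents are invented): map emits (word, 1) pairs, a shuffle groups them by key, and reduce sums each group.

```python
from collections import defaultdict
from concurrent.futures import ThreadPoolExecutor

docs = ["the quick brown fox", "the lazy dog", "the quick dog"]

def map_phase(doc):
    # Map: partition the input into (word, 1) key-value pairs
    return [(word, 1) for word in doc.split()]

with ThreadPoolExecutor() as pool:
    mapped = [kv for pairs in pool.map(map_phase, docs) for kv in pairs]

# Shuffle: group values by key
groups = defaultdict(list)
for key, value in mapped:
    groups[key].append(value)

# Reduce: aggregate the values for each key
counts = {key: sum(values) for key, values in groups.items()}
print(counts)  # {'the': 3, 'quick': 2, ...}
```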