Unit 2 - 2.2 (Basic Algorithms)
Content
• Parallel Algorithms
Challenges
The primary challenges include load balancing, which ensures all processors receive a roughly
equal amount of work; minimizing communication overhead, which reduces the data transferred
between processors; and scalability, which preserves the performance gains as the number of
processors increases.
Another widely used algorithm is bitonic sort, a specialized sorting network that is particularly effective
for datasets whose size is a power of two. Bitonic sort works by constructing bitonic sequences, which
are sequences that first increase monotonically and then decrease, or vice versa. These sequences are
then merged in parallel to produce a fully sorted list. The method is highly structured and predictable,
making it well suited to hardware implementations.
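The sketch below is a minimal sequential version of bitonic sort; the function names and the example input are illustrative only. The point to notice is that every compare-and-exchange inside a merge step is independent of the others, which is what a parallel or hardware implementation exploits.

```python
# Minimal sequential sketch of bitonic sort (illustrative, not a production version).
# Input length must be a power of two for the recursive halving to work.

def bitonic_sort(a, ascending=True):
    if len(a) <= 1:
        return a
    half = len(a) // 2
    # Build a bitonic sequence: first half ascending, second half descending.
    first = bitonic_sort(a[:half], True)
    second = bitonic_sort(a[half:], False)
    return bitonic_merge(first + second, ascending)

def bitonic_merge(a, ascending):
    if len(a) <= 1:
        return a
    half = len(a) // 2
    # Compare-exchange pairs that are half the sequence apart.
    # These comparisons are mutually independent, so they could run in parallel.
    for i in range(half):
        if (a[i] > a[i + half]) == ascending:
            a[i], a[i + half] = a[i + half], a[i]
    return bitonic_merge(a[:half], ascending) + bitonic_merge(a[half:], ascending)

if __name__ == "__main__":
    print(bitonic_sort([8, 3, 7, 1, 6, 2, 5, 4]))   # [1, 2, 3, 4, 5, 6, 7, 8]
```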
Odd-even transposition sort is a simple algorithm that relies on iterative compare-and-exchange
operations between neighboring elements. In each phase, adjacent elements are compared and
swapped if necessary, so smaller elements gradually move toward the beginning of the array.
Alternate phases compare even-indexed and odd-indexed pairs, and after enough phases the
entire dataset is sorted. The algorithm is easy to implement but can be slower than other
parallel sorts for large datasets.
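A small sketch of this phase structure, assuming a shared array; in a true parallel version, every compare-and-exchange within a phase would run on its own processor, while here the phases are simply executed in a loop.

```python
# Sequential sketch of odd-even transposition sort.
# n phases are sufficient to guarantee a fully sorted array of length n.

def odd_even_transposition_sort(a):
    n = len(a)
    for phase in range(n):
        start = phase % 2                 # even phase: pairs (0,1),(2,3)...; odd phase: (1,2),(3,4)...
        for i in range(start, n - 1, 2):  # these comparisons are independent -> parallelizable
            if a[i] > a[i + 1]:
                a[i], a[i + 1] = a[i + 1], a[i]
    return a

if __name__ == "__main__":
    print(odd_even_transposition_sort([5, 1, 4, 2, 8, 0, 2]))   # [0, 1, 2, 2, 4, 5, 8]
```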
Sample sort is another approach that combines the advantages of partitioning and parallelization. In this
algorithm, a small sample of the data is chosen and sorted to determine pivot points. The dataset is then
divided into partitions based on these pivots, with each partition assigned to a processor. Each processor
sorts its partition independently, and the sorted partitions are combined to produce the final sorted
array. Sample sort is particularly effective for unevenly distributed data as it minimizes load imbalance
among processors.
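A rough Python sketch of the idea follows, using multiprocessing.Pool to stand in for the processors; the sample size, number of partitions, and helper names are illustrative choices rather than part of the algorithm itself.

```python
# Rough sketch of sample sort: sample -> pick pivots -> partition -> sort partitions in parallel.
import random
from bisect import bisect_right
from multiprocessing import Pool

def sample_sort(data, p=4, oversample=8):
    # 1. Sort a small random sample and pick p-1 pivots from it.
    sample = sorted(random.sample(data, min(len(data), p * oversample)))
    pivots = [sample[i * len(sample) // p] for i in range(1, p)]
    # 2. Partition: each element goes to the bucket selected by the pivots.
    buckets = [[] for _ in range(p)]
    for x in data:
        buckets[bisect_right(pivots, x)].append(x)
    # 3. Each "processor" sorts its bucket independently; concatenation is globally sorted.
    with Pool(p) as pool:
        sorted_buckets = pool.map(sorted, buckets)
    return [x for bucket in sorted_buckets for x in bucket]

if __name__ == "__main__":
    data = [random.randint(0, 1000) for _ in range(100)]
    assert sample_sort(data) == sorted(data)
```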
b. Challenges
The main challenges in parallel sorting include communication overhead during partitioning
and merging, as processors must exchange data to ensure global order. Additionally,
maintaining global ordering while allowing local independence for individual processors can
be complex. Efficient parallel sorting algorithms must address these issues while ensuring
scalability and minimizing synchronization costs.
3. Searching in Parallel Systems
Efficient data retrieval is essential for large-scale systems.
a. Parallel Searching Algorithms
Binary search can be parallelized by distributing the sorted array among processors. Each
processor performs a binary search on its portion of the array, and the results are combined to
determine the target’s location. Hash-based searching distributes data across processors using a
hash function, allowing search queries to be routed to the appropriate processor. Breadth-first
search (BFS) is used for graph traversals in parallel, where nodes at the current level are
processed in parallel, generating the next level’s nodes. Depth-first search (DFS), although more
challenging to parallelize, can be implemented using task-based frameworks.
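The following is a sketch of the level-synchronous BFS pattern described above; the graph representation, worker count, and function names are illustrative assumptions. Each frontier is expanded concurrently, and the results are merged to form the next level.

```python
# Sketch of level-synchronous BFS: the frontier of the current level is expanded
# concurrently (here by a thread pool standing in for the processors), and the
# newly discovered vertices become the next frontier.
from concurrent.futures import ThreadPoolExecutor

def parallel_bfs(graph, source, workers=4):
    visited = {source}
    level = {source: 0}
    frontier = [source]
    depth = 0
    with ThreadPoolExecutor(max_workers=workers) as pool:
        while frontier:
            depth += 1
            # Parallel step: look up the neighbors of every frontier vertex concurrently.
            neighbor_lists = pool.map(lambda u: graph.get(u, []), frontier)
            next_frontier = []
            for neighbors in neighbor_lists:   # merge results sequentially
                for v in neighbors:
                    if v not in visited:
                        visited.add(v)
                        level[v] = depth
                        next_frontier.append(v)
            frontier = next_frontier
    return level

if __name__ == "__main__":
    g = {0: [1, 2], 1: [3], 2: [3, 4], 3: [5], 4: [5], 5: []}
    print(parallel_bfs(g, 0))   # {0: 0, 1: 1, 2: 1, 3: 2, 4: 2, 5: 3}
```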
b. Challenges
Challenges include workload balancing, particularly for unstructured data; synchronization to
guarantee correct results; and efficient memory-access patterns to sustain performance.