Sort Report
Abstract—This project investigates the performance of various sorting algorithms in both serial and parallel implementations using OpenMP. The algorithms explored include Bubble Sort, Merge Sort, and Quick Sort. The primary objective is to analyze and compare the execution time and scalability of each algorithm when subjected to increasing input sizes, while leveraging parallelism to enhance performance. Experimental results demonstrate that, while serial sorting algorithms are simple to implement, they become inefficient as input size grows, whereas the parallel implementations achieve substantial speedups for larger inputs.
I. INTRODUCTION

This project focuses on investigating parallel programming using OpenMP, particularly in the context of sorting algorithms. The goal is to compare the performance of serial and parallel implementations of three well-known sorting algorithms, Bubble Sort, Quick Sort, and Merge Sort, on randomly generated numbers, providing insight into how parallel programming impacts computational efficiency. The project involves creating serial and parallel implementations of each of the three sorting algorithms, along with a check that the output array is sorted, to validate the benchmark results. The performance of these algorithms is evaluated by measuring execution times across varying input sizes and by calculating and plotting the speedup.
II. METHODOLOGY

The sorting algorithms were implemented in C in both serial and parallel versions. OpenMP was used to parallelize loops and recursive calls where applicable.

A. Data Generation

Random integer arrays of varying sizes (1000 to 100000 elements) were generated using rand() % 100000.
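The data-generation step can be sketched as follows; the function name is an assumption, since the report only states that rand() % 100000 was used:

```c
#include <stdlib.h>

/* Fill an array with random integers in [0, 100000), matching the
   rand() % 100000 scheme described above. The helper name is an
   assumption; the report does not show its exact code. */
void generate_random_array(int *arr, int n) {
    for (int i = 0; i < n; i++) {
        arr[i] = rand() % 100000;
    }
}
```

An array of each test size (1000 up to 100000 elements) can be allocated with malloc and filled with this helper before every timing run.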
B. Serial Implementations

Standard implementations of Bubble Sort, Merge Sort, and Quick Sort were written to serve as benchmarks.
C. Parallel Implementations

• Bubble Sort: Used even-odd phase parallelism, distributing each phase's independent compare-and-swap operations across threads with #pragma omp for.
• Merge Sort: Parallelism was introduced using #pragma omp parallel sections. The merge sort divides the array recursively, and the two recursive calls were assigned to separate sections using OpenMP sections. The recursion depth was controlled using a depth parameter to avoid oversubscription of threads.
• Quick Sort: The parallel Quick Sort implementation uses recursive partitioning. After selecting a pivot and partitioning the array, two recursive calls are made to sort the subarrays. These recursive calls are parallelized using #pragma omp parallel sections, allowing independent sections of the array to be sorted concurrently. A depth parameter is used to limit recursion-based parallelism and avoid the overhead of excessive thread creation. When the depth limit is reached, the function falls back to the serial Quick Sort version.

D. Performance Measurement

Time was measured using omp_get_wtime() for all sorting variants, and the results were stored in a CSV file for analysis. Matplotlib was used to plot graphs comparing the time taken and the speedup.
III. RESULTS AND DISCUSSION

Sorting was tested on arrays of sizes 1000, 5000, 10000, 20000, 50000, and 100000. Table I shows the average execution time (in seconds) for each algorithm.

TABLE I: Execution Time of Sorting Algorithms (in seconds)

Size     SB      PB      SM     SQ     PM     PQ
1000     0.004   0.091   0.000  0.000  0.001  0.000
5000     0.041   0.353   0.001  0.000  0.002  0.000
10000    0.179   0.777   0.002  0.002  0.002  0.000
20000    1.193   1.926   0.006  0.002  0.004  0.002
50000    8.372   6.736   0.015  0.006  0.008  0.005
100000   33.348  15.025  0.030  0.014  0.018  0.008

SB: Serial Bubble, PB: Parallel Bubble, SM: Serial Merge, SQ: Serial Quick, PM: Parallel Merge, PQ: Parallel Quick

TABLE II: Speedup of Parallel Sorting Algorithms

Size     PB Speedup   PM Speedup   PQ Speedup
1000     0.044        –            –
5000     0.116        0.500        –
10000    0.230        1.000        –
20000    0.620        1.500        1.000
50000    1.243        1.875        1.200
100000   2.220        1.667        1.750

Speedup = Serial Time / Parallel Time. '–' indicates division by zero or an undefined value.

These results highlight the inefficiency of Bubble Sort as the input size increases, owing to its O(n²) time complexity. The parallel version, while somewhat faster at large input sizes, has the same asymptotic complexity; substantial gains are instead achieved by parallelizing Merge Sort and Quick Sort.

ACKNOWLEDGMENT

We would like to thank our faculty and institution. The project allowed us to explore parallel programming concepts and apply them to sorting algorithms using OpenMP, deepening our understanding through hands-on implementation and analysis.