
All-Pairs Shortest Path using MPI

Atharva Pande (2023201065)
Omprateek Shrivastava (2023201069)

1. Statement of the Problem


The objective of this project is to implement the All-Pairs Shortest Path (APSP) algorithm
using Message Passing Interface (MPI) to compute the shortest paths between every pair
of vertices in a given weighted graph. The challenge is to efficiently parallelize this
algorithm in a distributed computing environment to reduce computational time and
handle large graph datasets. The core task involves dividing the graph and its computations
across multiple processes using MPI's parallel communication mechanisms. Each process
will compute a portion of the paths and share intermediate results with other processes to
ensure global consistency. The solution will address data partitioning, process
synchronization, and minimization of inter-process communication overhead to achieve
good performance.

2. Important papers/material that you have read so far or you plan to read in the
context of the project

The following papers and materials are those we have read or plan to read:

1. Floyd-Warshall Algorithm
2. Efficient All-Pairs Shortest Paths Using MPI on Distributed Memory Systems: Link to
paper. This paper focuses on optimizing communication patterns in distributed
implementations of APSP, a key concern when using MPI.
3. Papers discussing parallel implementations of graph algorithms using MPI.
4. Understanding the core functionality of MPI, including point-to-point
communication, collective communication, and synchronization.

3. Scope of the project:


Since the project centers on a single distributed-memory algorithm, its scope is limited
to implementing the All-Pairs Shortest Path algorithm using MPI for distributed systems.

4. Approach to the Solution


We will implement the APSP algorithm using MPI in the following steps:

1. Graph Representation: The graph will be represented as an adjacency matrix,
where the value at position (i, j) represents the weight of the edge between node i
and node j.
2. Parallelization Strategy: We will divide the matrix into chunks that are distributed
to different processes. Each process will be responsible for computing a portion of
the shortest paths.
3. Communication Between Processes: We will use MPI's point-to-point and
collective communication methods to synchronize data across processes after each
step of the algorithm. This will ensure that each process has the necessary data to
compute the next step of the APSP algorithm.
4. Technologies:
○ Programming: The project will be written in C/C++ with MPI for parallel
computation.
○ Testing: For testing, we plan to use synthetic graph datasets with varying
sizes to evaluate the performance and scalability of the implementation.
○ Dataset: Custom generated input test cases.

5. Timeline
Week 1 (Project Setup): Set up the project environment, ensure all teammates are familiar
with MPI, and test simple MPI programs.

Week 2 (Graph Representation and Sequential APSP): Implement the basic graph
representation and the sequential version of the APSP algorithm to establish a baseline.

Week 3 (Initial Parallelization): Parallelize the APSP algorithm using MPI and implement
basic communication patterns.

Week 4 (Optimization and Scaling): Optimize the parallel algorithm to minimize
communication overhead and test scaling on larger datasets.

Week 5 (Testing and Finalization): Test the implementation on real-world datasets, refine
the code based on results, and prepare the final report and presentation.
