
Parallel Programming Project 2

4th Year Cybersecurity

RSA

Work carried out by: DOUHABI Mohamed
Group: Cybersécurité TP B

Professor:
Mr. Bakhouya Mohamed

Table of Contents

I. Introduction
II. RSA Algorithm Overview
 1. Key Generation
 2. Encryption Key (Public Key) Selection
 3. Decryption Key (Private Key) Calculation
 4. Encryption and Decryption
III. Parallel Programming Concepts
 1. Unified Parallel C (UPC)
 2. Message Passing Interface (MPI)
 3. Compute Unified Device Architecture (CUDA)
IV. Implementation Details
 1. Unified Parallel C (UPC) Implementation
 2. Message Passing Interface (MPI) Implementation
 3. Compute Unified Device Architecture (CUDA) Implementation
V. Comparison of Complexity and Time
VI. Conclusion

I. Introduction:

In the digital age, the RSA algorithm plays an essential role in securing online communications and transactions. Developed in 1977, it uses asymmetric-key cryptography, relying for its security on the difficulty of factoring large numbers into their prime components.

This project focuses on implementing RSA using parallel programming techniques: Unified Parallel C (UPC), Message Passing Interface (MPI), and Compute Unified Device Architecture (CUDA). Our aim is to explore and compare the computational complexities and execution times of these paradigms, providing insights into their effectiveness for RSA implementation.

By navigating through key generation, the computation of Euler's totient function, and the central task of modulus factorization, we seek to unravel the synergy between the mathematical complexities of RSA and the parallel programming models employed. Through this exploration, we aim to contribute valuable results to the field of implementing cryptographic algorithms in parallel computing, offering a nuanced understanding of the interplay between the foundations of RSA and parallel processing paradigms.

II. RSA Algorithm Overview:

The RSA algorithm, named after its inventors Rivest, Shamir and Adleman, is a widely
used public-key cryptosystem. Its security is based on the challenge of factoring the
product of two large prime numbers. The algorithm works as follows:

1. Key Generation:

• Choose two distinct large prime numbers, p and q.
• Compute the modulus m = p × q.
• Calculate Euler's totient function ϕ(m) = (p − 1) × (q − 1).

2. Encryption Key (Public Key) Selection:

• Choose an encryption exponent r that is coprime to ϕ(m), with 1 < r < ϕ(m).
• Ensure gcd(ϕ(m), r) = 1 (r and ϕ(m) share no common factor).

3. Decryption Key (Private Key) Calculation:

• Compute the decryption exponent s such that r × s ≡ 1 mod ϕ(m).
• The public key is (m, r), and the private key is (m, s).

4. Encryption and Decryption:

• To encrypt a message M, raise it to the power of r modulo m: C ≡ M^r mod m.
• To decrypt C and retrieve the original message M, raise C to the power of s modulo m: M ≡ C^s mod m.

The security of RSA relies on the difficulty of factoring the modulus m into its prime components p and q. Without knowledge of p and q, breaking the RSA encryption is computationally infeasible.
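
As a small worked example (with primes far too small for real security): take p = 61 and q = 53, so m = 3233 and ϕ(m) = 60 × 52 = 3120. Choosing r = 17, which is coprime to 3120, the decryption exponent is s = 2753, since 17 × 2753 = 46801 = 15 × 3120 + 1. Encrypting M = 65 gives C ≡ 65^17 mod 3233 = 2790, and decrypting gives 2790^2753 mod 3233 = 65, recovering the original message.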

III. Parallel Programming Concepts:

In the field of parallel programming, three distinct paradigms stand out as powerful tools for harnessing the computing power of modern systems: UPC (Unified Parallel C), MPI (Message Passing Interface), and CUDA (Compute Unified Device Architecture).
1. Unified Parallel C (UPC):
UPC is a parallel programming extension of the C programming language,
designed for high-performance computing on parallel architectures. It features a
shared memory model, allowing multiple processors to access a common
memory space. Threads communicate via shared variables, simplifying
programming for shared-memory systems.

2. Message Passing Interface (MPI):
MPI is a widely used standard for message-passing libraries in parallel
computing. It excels in distributed memory systems, enabling communication
between independent processes running on separate nodes. MPI
implementations generally involve sending and receiving messages, facilitating
collaboration between parallel processes.

3. Compute Unified Device Architecture (CUDA):

CUDA is a parallel computing platform and programming model developed by NVIDIA. It is specially designed for general-purpose parallel processing on graphics processing units (GPUs). CUDA enables developers to offload parallelizable tasks onto the GPU, taking advantage of its massively parallel architecture to improve computing speed.

Understanding these paradigms is essential for implementing the RSA algorithm in a parallelized environment. UPC facilitates shared-memory parallelism, MPI excels in distributed systems, and CUDA exploits GPU parallelism to accelerate computations. The following sections discuss how each paradigm is applied to realize an efficient parallel implementation of the RSA algorithm.

IV. Implementation Details:

1. Unified Parallel C (UPC) Implementation :

UPC simplifies parallelization by using a shared memory model. The workload is distributed among threads, with each thread operating on a subset of the data. Shared variables benefit the key generation, encryption, and decryption processes. UPC is easy to use for shared-memory systems, which streamlines the implementation of RSA's modular arithmetic operations.

• The provided code implements the RSA (Rivest-Shamir-Adleman) cryptographic algorithm in Unified Parallel C (UPC). Its purpose is to generate public and private keys for secure communication. The code leverages shared-memory parallelism to enhance the efficiency of the key generation process.

• The shared_gcd function is responsible for calculating the greatest common divisor, a fundamental operation in the RSA algorithm. Additionally, the shared_generateKeys function computes the values of n (the product of the prime numbers p and q) and phi (Euler's totient function), and iteratively determines suitable values for e and d using the GCD function.

• To maintain synchronization among threads, the code employs the upc_barrier statement. Once the key generation process is complete, the public and private keys are printed to the console.
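
Since the original listing survives only as screenshots, the sketch below shows a minimal version of the structure just described, assuming small fixed demo primes and the function names mentioned above (shared_gcd, shared_generateKeys); the actual code in the report may differ in detail.

/* Minimal UPC sketch of the key-generation structure described above.
   Assumed demo primes p = 61, q = 53; compile with a UPC compiler,
   e.g.  upcc rsa_upc.c -o rsa_upc  and run with several threads. */
#include <upc.h>
#include <stdio.h>

shared long n, phi, e, d;           /* key material visible to every thread */

long shared_gcd(long a, long b) {   /* Euclid's algorithm */
    while (b != 0) { long t = b; b = a % b; a = t; }
    return a;
}

void shared_generateKeys(long p, long q) {
    if (MYTHREAD == 0) {            /* one thread computes the keys */
        long lphi = (p - 1) * (q - 1), le, ld;
        for (le = 3; le < lphi; le += 2)   /* smallest odd e coprime to phi */
            if (shared_gcd(le, lphi) == 1) break;
        for (ld = 2; ld < lphi; ld++)      /* brute-force modular inverse of e */
            if ((le * ld) % lphi == 1) break;
        n = p * q; phi = lphi; e = le; d = ld;
    }
    upc_barrier;                    /* all threads wait until the keys exist */
}

int main(void) {
    shared_generateKeys(61, 53);
    if (MYTHREAD == 0)
        printf("Public key: (%ld, %ld)  Private key: (%ld, %ld)\n",
               (long)n, (long)e, (long)n, (long)d);
    return 0;
}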

2. Message Passing Interface (MPI) Implementation:

In MPI, where processes communicate via messages, the distributed nature of RSA's key generation poses challenges. We designate a master process for key generation and distribute the public key to all processes. Each process operates on its share of the data for encryption and decryption.

• The program begins by generating the public and private keys essential for the RSA encryption process. Subsequently, the user is prompted to input a message for encryption. The program then broadcasts the message and encrypts it concurrently across MPI processes.

• After encryption, the program proceeds with decryption, reconstructing the original message. Finally, the decrypted message is displayed to the user. This code exemplifies the parallel nature of RSA encryption and decryption, showcasing the use of MPI for distributed computing and GMP for arbitrary-precision arithmetic.

• This implementation highlights the effectiveness of MPI in parallelizing the RSA algorithm, demonstrating its potential for efficient distributed computing. Additionally, the incorporation of the GMP library ensures accurate handling of large integers, a crucial aspect of cryptographic operations.
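
As the MPI listing also appears only as screenshots, the following is a minimal sketch of the broadcast-then-encrypt flow described above. For brevity it uses 64-bit integers with the small demo keys from Section II instead of the GMP big-number types the original relies on, and the round-robin split of the message is an illustrative choice.

/* Minimal MPI sketch: the master holds the plaintext, the message is
   broadcast, each rank encrypts its share, and the master gathers and
   decrypts. Demo keys from p = 61, q = 53; compile with
   mpicc rsa_mpi.c -o rsa_mpi */
#include <mpi.h>
#include <stdio.h>
#include <string.h>

#define MAXLEN 64

/* square-and-multiply modular exponentiation: base^exp mod m */
static unsigned long long modpow(unsigned long long base,
                                 unsigned long long exp,
                                 unsigned long long m) {
    unsigned long long result = 1;
    base %= m;
    while (exp > 0) {
        if (exp & 1) result = (result * base) % m;
        base = (base * base) % m;
        exp >>= 1;
    }
    return result;
}

int main(int argc, char **argv) {
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    unsigned long long m = 3233, r = 17, s = 2753;   /* demo key pair */
    char msg[MAXLEN] = {0};
    if (rank == 0) strcpy(msg, "HELLO MPI");         /* master's plaintext */
    MPI_Bcast(msg, MAXLEN, MPI_CHAR, 0, MPI_COMM_WORLD);

    /* Each rank encrypts the characters it owns (round-robin split). */
    unsigned long long cipher[MAXLEN] = {0}, gathered[MAXLEN] = {0};
    int len = (int)strlen(msg);
    for (int i = rank; i < len; i += size)
        cipher[i] = modpow((unsigned long long)msg[i], r, m);

    /* Non-owned slots are zero, so summing merges the partial results. */
    MPI_Reduce(cipher, gathered, MAXLEN, MPI_UNSIGNED_LONG_LONG, MPI_SUM,
               0, MPI_COMM_WORLD);

    if (rank == 0) {                                 /* master decrypts */
        char out[MAXLEN] = {0};
        for (int i = 0; i < len; i++)
            out[i] = (char)modpow(gathered[i], s, m);
        printf("Decrypted: %s\n", out);
    }
    MPI_Finalize();
    return 0;
}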

Output: (screenshot of the program run in the original document)
3. Compute Unified Device Architecture (CUDA) Implementation:

CUDA allows us to harness the parallel processing power of GPUs. We exploit the GPU's architecture to parallelize modular exponentiation, a fundamental operation in RSA. The encryption and decryption phases can be significantly accelerated through parallel execution on the GPU.

• Header Inclusions: The code includes essential headers for input/output, mathematical operations, and string manipulation.

• Global Variables: Key variables are declared for prime numbers, mathematical
operations, and message storage.

• Function Declarations: Functions for primality testing, GCD calculation, key
generation, encryption, and decryption are declared.

• Main Function: Users input prime numbers and a message. Prime numbers are
chosen for their resistance against certain mathematical attacks. Their unique
factorization properties contribute to the security of the RSA algorithm.

• Core Functions: Functions handle primality checks, key generation, private key
calculation, encryption, and decryption.

➢ The code performs RSA encryption and decryption to secure communication. It uses
user-provided prime numbers to generate keys crucial for encryption. The emphasis
on prime numbers and CUDA parallelization enhances security and computational
efficiency, showcasing the practical application of RSA in cryptography.
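
Since the CUDA listing is likewise only shown as screenshots, here is a minimal sketch of the encryption/decryption step described above: one GPU thread per message character, each performing a modular exponentiation. The demo keys from Section II stand in for the user-provided primes, and names such as rsaKernel are illustrative rather than taken from the original code.

/* Minimal CUDA sketch: each thread computes in[i]^exp mod m for one
   character. Demo keys from p = 61, q = 53; compile with
   nvcc rsa_cuda.cu -o rsa_cuda */
#include <cstdio>
#include <cstring>

/* square-and-multiply modular exponentiation, callable on the device */
__host__ __device__ unsigned long long modpow(unsigned long long base,
                                              unsigned long long exp,
                                              unsigned long long m) {
    unsigned long long result = 1;
    base %= m;
    while (exp > 0) {
        if (exp & 1) result = (result * base) % m;
        base = (base * base) % m;
        exp >>= 1;
    }
    return result;
}

/* one thread per character: out[i] = in[i]^exp mod m */
__global__ void rsaKernel(const unsigned long long *in,
                          unsigned long long *out,
                          unsigned long long exp, unsigned long long m,
                          int len) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < len) out[i] = modpow(in[i], exp, m);
}

int main() {
    const unsigned long long m = 3233, r = 17, s = 2753;  /* demo keys */
    const char *msg = "HELLO CUDA";
    int len = (int)strlen(msg);
    size_t bytes = len * sizeof(unsigned long long);

    unsigned long long h_in[64], h_ct[64], h_pt[64];
    for (int i = 0; i < len; i++) h_in[i] = (unsigned long long)msg[i];

    unsigned long long *d_in, *d_out;
    cudaMalloc(&d_in, bytes);
    cudaMalloc(&d_out, bytes);

    cudaMemcpy(d_in, h_in, bytes, cudaMemcpyHostToDevice);
    rsaKernel<<<1, 64>>>(d_in, d_out, r, m, len);          /* encrypt */
    cudaMemcpy(h_ct, d_out, bytes, cudaMemcpyDeviceToHost);

    cudaMemcpy(d_in, h_ct, bytes, cudaMemcpyHostToDevice);
    rsaKernel<<<1, 64>>>(d_in, d_out, s, m, len);          /* decrypt */
    cudaMemcpy(h_pt, d_out, bytes, cudaMemcpyDeviceToHost);

    printf("Round-trip: ");
    for (int i = 0; i < len; i++) putchar((int)h_pt[i]);
    putchar('\n');
    cudaFree(d_in); cudaFree(d_out);
    return 0;
}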

Output: (screenshot of the program run in the original document)

V. Comparison of Complexity and Time:

Each parallel programming paradigm brings its own set of complexities and efficiencies
to the RSA implementation. UPC, with its shared memory model, simplifies
parallelization and facilitates seamless communication among threads. However, it may
face limitations in scalability due to shared memory constraints.

MPI, designed for distributed systems, excels in handling the challenges of parallelism
across multiple nodes. Its message-passing approach ensures effective communication
but introduces overhead in message transmission. The implementation's efficiency relies
on balancing workloads and minimizing communication delays.

CUDA, optimized for GPU parallelism, stands out in accelerating cryptographic operations through massive parallelism. The use of GPUs enhances computational speed, especially in modular exponentiation. However, the effectiveness depends on the GPU's architecture and may not be universally applicable to all systems.

In terms of time complexity, UPC and MPI implementations might face challenges in
scaling linearly with the increasing number of threads or processes. Communication
overhead and synchronization could impact performance. On the other hand, CUDA,
leveraging GPU parallelism, demonstrates potential for significant time reduction,
especially in computationally intensive tasks.

Ultimately, the choice of parallel programming paradigm depends on the specific requirements of the system and the computational resources available. This project provides valuable insights into the trade-offs and considerations in selecting an appropriate paradigm for parallelizing the RSA algorithm, contributing to the broader understanding of cryptographic implementations in parallel computing environments.

VI. Conclusion:

In conclusion, this parallel programming project delves into the implementation of the
RSA algorithm using three distinct paradigms: Unified Parallel C (UPC), Message
Passing Interface (MPI), and Compute Unified Device Architecture (CUDA). The RSA
algorithm, renowned for its role in securing online communication, relies on the
mathematical complexity of modular arithmetic and prime factorization.

The exploration of UPC reveals its suitability for shared-memory systems, streamlining
the parallelization of RSA's key generation, encryption, and decryption processes. The
use of MPI demonstrates effective distributed computing, with a master process
handling key generation and subsequent distribution to parallel processes. Lastly,
CUDA harnesses the parallel processing power of GPUs, significantly accelerating
modular exponentiation for encryption and decryption.

