
AIR UNIVERSITY

DEPARTMENT OF ELECTRICAL & COMPUTER ENGINEERING

EXPERIMENT NO. 6

Lab Title: Introduction to Parallel Programming with CUDA C: Exploring CUDA C Programming: Convolution, Fork and Pthread.

Student Name: M. Bilal Ijaz, Agha Ammar Khan    Reg. No.: 210316, 210300

Objective: Implement and analyze various 2D array/matrix operations in CUDA C

LAB ASSESSMENT:

Attributes                                                        Excellent (5)   Good (4)   Average (3)   Satisfactory (2)   Unsatisfactory (1)

Ability to Conduct Experiment

Ability to assimilate the results

Effective use of lab equipment and follows the lab safety rules

Total Marks: Obtained Marks:

LAB REPORT ASSESSMENT:

Attributes                                                        Excellent (5)   Good (4)   Average (3)   Satisfactory (2)   Unsatisfactory (1)

Data Presentation

Experiment Results

Conclusion

Total Marks: Obtained Marks:

Date: 31/10/2024 Signature:


LAB#06
TITLE: Exploring CUDA C Programming: Convolution, Fork and Pthread.

Introduction:
Convolution is a mathematical operation that combines two functions or sets of data to
produce a third function that shows how one function modifies or “shapes” the other. In
simple terms, it is a way to apply a filter (a small set of numbers) to data (such as an image,
an audio signal, or a sequence) to detect patterns, smooth the data, or emphasize certain parts of it.

How Convolution Works:


Imagine you have:
1. Data: This could be an image (a grid of pixels) or a signal (a sequence of numbers).
2. Filter (or Kernel): A smaller set of numbers that you “slide” across the data.

To perform convolution:
1. Place the filter on top of part of the data.
2. Multiply each element in the filter with the corresponding element in the data.
3. Sum up all these multiplied values to get a single new value.
4. Slide the filter to the next position and repeat the process, creating a new
“transformed” version of the data.

Simple Example:
For an image, let’s say the filter is designed to find edges. When you slide this filter across
the image, it will produce high values where there’s an edge (a big change in pixel values),
allowing you to detect edges in the output.
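
To make the steps concrete, here is a small worked illustration (the numbers are assumed for demonstration and are not taken from the lab). Take the sequence [2, 2, 2, 8, 8, 8], which has a sharp jump in the middle, and the difference filter [-1, 0, 1]. Sliding the filter and taking the multiply-and-sum at each position gives:

position 0: (-1)(2) + (0)(2) + (1)(2) = 0
position 1: (-1)(2) + (0)(2) + (1)(8) = 6
position 2: (-1)(2) + (0)(8) + (1)(8) = 6
position 3: (-1)(8) + (0)(8) + (1)(8) = 0

The output [0, 6, 6, 0] is large exactly where the values change sharply, which is how an edge-detecting filter makes edges stand out.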

Why Use Convolution?


Convolution helps in:

1. Edge detection (in images) or pattern detection (in sequences).
2. Smoothing data by averaging neighbouring values, which removes noise.
3. Feature extraction for machine learning, where it helps identify important
   characteristics in data like shapes, textures, or trends.

Convolution is widely used in fields like computer vision, natural language processing, and
signal processing because it efficiently captures and highlights patterns in data.
1D Convolution:
1D Convolution is a type of convolution operation applied to one-dimensional data,
commonly used in processing sequential data such as audio signals, time series, and textual
data. The purpose of 1D convolution is to detect patterns in data sequences by using a
kernel (or filter) that slides over the input sequence, performing element-wise multiplication
and summing the results.

How 1D Convolution Works:


In a 1D convolution operation:
1. A kernel of a specific size (e.g., length 3 or 5) is defined. The kernel is a small array of
weights used to capture certain features of the input sequence.
2. The kernel slides across the input sequence with a defined stride, which is the step
size of the kernel’s movement.
3. At each position, the kernel and the input values at that position are multiplied
element-wise, and the resulting values are summed up to produce a single output
value for that position.
4. The output is a new sequence (often called a feature map) that represents extracted
features from the original sequence.
For instance, in audio signal processing, 1D convolutions are used to detect patterns in
waveforms, while in natural language processing, they help recognize patterns in word
sequences (like n-grams or phrase structures).
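
As a concrete sketch of the steps above, a minimal 1D “valid” convolution can be written in plain C as follows. This is an illustrative sketch with assumed array names and sizes, not the code from the lab tasks; note that, as in most machine-learning libraries, the kernel is applied without flipping (technically a cross-correlation):

#include <stdio.h>

/* Minimal 1D "valid" convolution sketch: output has n - k + 1 elements. */
void conv1d_valid(const float *input, int n, const float *kernel, int k, float *output) {
    for (int i = 0; i <= n - k; i++) {          /* one output value per kernel position */
        float sum = 0.0f;
        for (int j = 0; j < k; j++)             /* element-wise multiply, then accumulate */
            sum += input[i + j] * kernel[j];
        output[i] = sum;
    }
}

int main(void) {
    float input[]  = {1, 1, 1, 9, 9, 9};        /* a sequence with a sharp "edge" */
    float kernel[] = {-1, 0, 1};                /* simple difference (edge-detecting) filter */
    float output[4];
    conv1d_valid(input, 6, kernel, 3, output);
    for (int i = 0; i < 4; i++)
        printf("%.0f ", output[i]);             /* prints: 0 8 8 0 */
    printf("\n");
    return 0;
}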

Key Parameters:
1D Convolution involves several key parameters:

• Kernel Size: Determines the length of the kernel. A larger kernel captures more
context in the input.
• Stride: Controls how much the kernel shifts at each step. A stride of 1 moves the
kernel one position at a time, while a stride greater than 1 skips positions.
• Padding: Optional padding adds extra values (often zeros) at the sequence’s
boundaries to control the output size.
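
To tie these parameters together, below is a minimal CUDA C sketch of a 1D convolution kernel with stride and zero padding. The parameter names and padding scheme are assumptions made for illustration; this is not the lab's actual task code:

/* Illustrative CUDA kernel: 1D convolution with stride and zero padding.
   Each thread computes one output element; out_len = (n + 2*pad - k) / stride + 1. */
__global__ void conv1d_strided(const float *input, int n,
                               const float *kernel, int k,
                               float *output, int out_len,
                               int stride, int pad) {
    int o = blockIdx.x * blockDim.x + threadIdx.x;   /* output index for this thread */
    if (o >= out_len) return;

    float sum = 0.0f;
    int start = o * stride - pad;                    /* leftmost input index under the kernel */
    for (int j = 0; j < k; j++) {
        int idx = start + j;
        if (idx >= 0 && idx < n)                     /* out-of-range indices act as zeros (padding) */
            sum += input[idx] * kernel[j];
    }
    output[o] = sum;
}

After copying the input and kernel to the device with cudaMalloc and cudaMemcpy, such a kernel would typically be launched with one thread per output element, e.g. conv1d_strided<<<(out_len + 255) / 256, 256>>>(...).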

Applications of 1D Convolution:
1D convolutions are commonly used in:

• Audio Processing: Recognizing patterns in sound waves, speech recognition, etc.


• Time Series Analysis: Detecting trends or anomalies in sequences over time.
• Text Processing: Feature extraction from word embeddings or other sequential
language data, useful in NLP tasks like sentiment analysis or classification.
2D Convolution:
2D Convolution is a convolution operation applied to two-dimensional data, most commonly
images. In a 2D convolution, a small matrix called a filter or kernel slides over an image,
applying a mathematical operation to detect specific patterns, such as edges, textures, or
colors.

How 2D Convolution Works:


1. Image and Kernel: Imagine an image represented by a grid of pixels (each with a
brightness or color value), and a kernel, a smaller grid of weights (often 3x3, 5x5,
etc.) designed to capture specific features.
2. Sliding the Kernel: Place the kernel over a section of the image. Multiply each value
in the kernel with the corresponding pixel value beneath it, then sum the results to
get a single output value.
3. Repeat Across the Image: Slide the kernel across the image, repeating the process,
to create a new grid of values (a feature map), where each value represents the
detected pattern at that position.
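
A minimal CUDA C sketch of this procedure is shown below; one thread computes one output pixel. The function and parameter names are assumed for illustration (this is not the lab's task code), and the image is stored row-major:

/* Illustrative CUDA kernel: 2D convolution with a square, odd-sized kernel and zero padding.
   Each thread handles one output pixel of a width x height, row-major image. */
__global__ void conv2d(const float *image, int width, int height,
                       const float *kernel, int ksize,   /* ksize is odd, e.g. 3 or 5 */
                       float *output) {
    int col = blockIdx.x * blockDim.x + threadIdx.x;
    int row = blockIdx.y * blockDim.y + threadIdx.y;
    if (col >= width || row >= height) return;

    int r = ksize / 2;                                   /* kernel radius */
    float sum = 0.0f;
    for (int ki = 0; ki < ksize; ki++) {
        for (int kj = 0; kj < ksize; kj++) {
            int y = row + ki - r;
            int x = col + kj - r;
            if (y >= 0 && y < height && x >= 0 && x < width)   /* zero padding at the borders */
                sum += image[y * width + x] * kernel[ki * ksize + kj];
        }
    }
    output[row * width + col] = sum;
}

On the host side this would typically be launched with a 2D grid, for example dim3 block(16, 16); dim3 grid((width + 15) / 16, (height + 15) / 16); conv2d<<<grid, block>>>(...);, after the image and kernel have been copied to device memory.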

Example Use: Edge Detection:


A kernel designed for edge detection will highlight parts of the image where there’s a
sudden change in pixel values (like where a color changes sharply from dark to light). By
applying this kernel, the output will contain high values at edges, effectively making the
edges stand out.
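
For reference, one commonly used 3x3 edge-detection kernel (a Laplacian) is shown below; whether the lab tasks used this exact kernel is not recorded here:

/* A common 3x3 Laplacian edge-detection kernel: its weights sum to zero, so flat
   regions produce 0 while sharp intensity changes produce large output values. */
const float edge_kernel[3 * 3] = {
     0, -1,  0,
    -1,  4, -1,
     0, -1,  0
};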

Key Parameters:
In 2D convolution, you can control:
1. Kernel Size: Defines the area the kernel covers, like 3x3 or 5x5. Larger kernels
capture more context.
2. Stride: Controls how much the kernel moves at each step. A stride of 1 means
moving one pixel at a time, while higher strides skip pixels.
3. Padding: Adds extra pixels (often zeros) around the image edges, allowing the kernel
to cover more of the image.
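
The standard relationship between these parameters and the output size (along each dimension) is:

output_size = (input_size + 2 * padding - kernel_size) / stride + 1   (integer division)

For example, a 32x32 image convolved with a 5x5 kernel using padding 2 and stride 1 gives (32 + 4 - 5) / 1 + 1 = 32, i.e. the output keeps the original 32x32 size.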

Applications of 2D Convolution:
2D convolution is widely used in computer vision tasks, including:

• Image Classification: Recognizing objects, faces, or scenes.


• Image Processing: Tasks like blurring, sharpening, and noise reduction.
• Feature Detection: Finding specific features, such as edges, textures, or shapes.
2D Convolution is fundamental to Convolutional Neural Networks (CNNs), where it helps
automatically learn and detect features at different levels, allowing models to make sense of
visual data.
Lab Tasks:

Task1:

Code and Output:


Task2:

Code and Output:


Task3:

Code and Output:


Task4:

Code and Output:


Conclusion:
In this lab, we explored parallel programming concepts through the implementation of 2D
array and matrix operations using CUDA in C, focusing on convolution, parallel forks, and
pthreads. By leveraging CUDA’s parallel processing capabilities, we achieved significant
performance improvements in executing computationally intensive tasks on a GPU.
Implementing convolution on a 2D array highlighted the differences in processing times
between CPU and GPU, emphasizing the advantages of parallelization for large datasets.
Furthermore, using the fork system call and the pthread library in C demonstrated the effectiveness of parallel
processing in multicore CPU environments, even though the performance gains were more
pronounced in the GPU-based CUDA implementation. This lab provided hands-on
experience with fundamental parallel programming concepts and introduced the practical
challenges of synchronization and resource management, offering a solid foundation for
further exploration in high-performance computing.
