OpenCV
Now, there are a few points you should keep in mind while shifting the image
by tx and ty values.
• Providing positive values for tx will shift the image to the right, and negative values
will shift the image to the left.
• Similarly, positive values of ty will shift the image down, while negative values will
shift the image up.
line(image, start_point, end_point, color, thickness)
circle(image, center_coordinates, radius, color, thickness)
Filled circle: thickness=-1
rectangle(image, start_point, end_point, color, thickness)
ellipse(image, centerCoordinates, axesLength, angle, startAngle, endAngle, color, thickness)
Half-ellipse:
• Set the endAngle for the blue ellipse as 180 deg
• Change the orientation of the red ellipse from 90 to 0
• Specify the start and end angles for the red ellipse, as 0 and 180 respectively
• Specify the thickness of the red ellipse to be a negative number
# define the ellipse center
ellipse_center = (415, 190)
# define the axes lengths
axis1 = (100, 50)
Such kernels can be used to perform mathematical operations on each pixel of an image to
achieve a desired effect (like blurring or sharpening an image).
o Values above the threshold are set to the threshold, while others remain
unchanged:
th, dst = cv2.threshold(src, 127, 255, cv2.THRESH_TRUNC)
4. To Zero Threshold (cv2.THRESH_TOZERO):
o Rule:
o Values below the threshold are set to 0, while others remain unchanged:
th, dst = cv2.threshold(src, 127, 255, cv2.THRESH_TOZERO)
5. Inverse To Zero Threshold (cv2.THRESH_TOZERO_INV):
o Rule:
o Values above the threshold are set to 0, while others remain unchanged:
th, dst = cv2.threshold(src, 127, 255, cv2.THRESH_TOZERO_INV)
EDGE DETECTION: SOBEL AND CANNY
X-Direction Kernel
Y-Direction Kernel
• Convolving with only the Vertical (X-direction) kernel yields a Sobel image with edges enhanced in the X-direction.
• Convolving with the Horizontal (Y-direction) kernel yields a Sobel image with edges enhanced in the Y-direction.
Let Gx and Gy represent the intensity gradients in the X and Y directions respectively. If Kx and Ky denote the X and Y kernels defined above:

Gx = Kx * I,  Gy = Ky * I

where * denotes the convolution operator, and I represents the input image. The final approximation of the gradient magnitude G can be computed as:

G = sqrt(Gx^2 + Gy^2)
userdata (optional):
• Any additional data you want to pass to the callback function.
Example:
# Import packages
import cv2

# Callback invoked by highgui on mouse events
# (the body of drawRectangle was not given; this is a minimal assumed version
# that draws a rectangle between the press and release points)
def drawRectangle(action, x, y, flags, userdata):
    global start_point
    if action == cv2.EVENT_LBUTTONDOWN:
        start_point = (x, y)
    elif action == cv2.EVENT_LBUTTONUP:
        cv2.rectangle(image, start_point, (x, y), (0, 255, 0), 2)

# Read image
image = cv2.imread("Assets/cards.jpg")
# Make a temporary copy, useful to clear the drawing
temp = image.copy()
# Create a named window
cv2.namedWindow("Window")
# highgui function called when mouse events occur
cv2.setMouseCallback("Window", drawRectangle)

k = 0
# Close the window when key q (code 113) is pressed
while k != 113:
    # Display the image
    cv2.imshow("Window", image)
    k = cv2.waitKey(0)
    # If c (code 99) is pressed, clear the window using the saved copy
    if k == 99:
        image = temp.copy()
        cv2.imshow("Window", image)
cv2.destroyAllWindows()
# Hypothetical callback: called by highgui with the new trackbar position
def onChange(value):
    print("Brightness:", value)

# Read an image
image = cv2.imread("example.jpg")
# Display it (this also creates the 'Image' window the trackbar attaches to)
cv2.imshow('Image', image)
# Create a trackbar (initial value 100, maximum value 200)
cv2.createTrackbar('Brightness', 'Image', 100, 200, onChange)
cv2.waitKey(0)
cv2.destroyAllWindows()
import cv2

# Load an image in grayscale
image = cv2.imread("Assets/cards.jpg", cv2.IMREAD_GRAYSCALE)
# Apply binary threshold
_, binary = cv2.threshold(image, 127, 255, cv2.THRESH_BINARY)

# Find contours using different retrieval modes
contours_external, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
contours_list, _ = cv2.findContours(binary, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
contours_tree, hierarchy = cv2.findContours(binary, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)

# Draw the external contours on a colour copy and display the result
result = cv2.cvtColor(image, cv2.COLOR_GRAY2BGR)
cv2.drawContours(result, contours_external, -1, (0, 255, 0), 2)
cv2.imshow("Contours", result)
cv2.waitKey(0)
cv2.destroyAllWindows()
TEMPORAL MEDIAN FILTERING
Temporal median filtering is a technique used in video processing or sequences of images to
reduce noise or remove unwanted transient objects (like flickering or moving elements). The
"temporal" aspect means it operates across frames over time, rather than within a single
image.
How It Works:
1. Input:
o A sequence of images or video frames Ft, Ft−1, Ft−2, …, Ft−n, where t represents
the current time (frame).
o Each pixel location across the frames forms a time series of intensity values.
2. Processing:
o For each pixel location (x,y), collect intensity values from the corresponding
pixel in the last N frames.
o Calculate the median of these pixel values.
o Replace the pixel intensity at (x,y) in the current frame with this median value.
3. Output:
o A new sequence of frames where noise (e.g., flickering or transient objects) is
significantly reduced, while preserving the overall structure of the video.
Applications:
1. Noise Reduction:
o Removes transient noise like salt-and-pepper noise, especially in videos
captured under poor lighting conditions.
2. Background Subtraction:
o Creates a clean background by filtering out moving objects or temporary
occlusions over time.
3. Object Detection:
o By stabilizing the background, it improves accuracy in detecting objects by
highlighting only consistent foreground changes.
EXAMPLE:
import cv2
import numpy as np

# The filter function's body was missing; this is a minimal assumed version
def temporal_median_filter(frames, window_size=5):
    filtered_frames = []
    for i in range(len(frames)):
        start = max(0, i - window_size + 1)
        # Per-pixel median over up to window_size recent frames
        window = np.stack(frames[start:i + 1], axis=0)
        filtered_frames.append(np.median(window, axis=0).astype(np.uint8))
    return filtered_frames

# Example Usage:
# Read video
cap = cv2.VideoCapture("input_video.mp4")
frames = []
while cap.isOpened():
    ret, frame = cap.read()
    if not ret:
        break
    # Convert to grayscale for simplicity
    gray_frame = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    frames.append(gray_frame)
cap.release()

# Filter the frames and write them to a new video
filtered = temporal_median_filter(frames)
h, w = filtered[0].shape
out = cv2.VideoWriter("output_video.mp4", cv2.VideoWriter_fourcc(*"mp4v"), 30, (w, h), isColor=False)
for f in filtered:
    out.write(f)
out.release()