
Object Grabbing of Robotic Arm Based on OpenMV Module Positioning

Jian Wang
College of Electronic Engineering (College of Artificial Intelligence)
South China Agricultural University
Guangzhou, China
[email protected]

Haishen Peng
College of Electronic Engineering (College of Artificial Intelligence)
South China Agricultural University
Guangzhou, China
[email protected]

2023 2nd International Conference on Artificial Intelligence and Computer Information Technology (AICIT), DOI: 10.1109/AICIT59054.2023.10277796

Abstract—Many robotic arms today incorporate machine vision to enhance their task diversity. In this paper, a camera-based robotic arm grabbing system is introduced. Firstly, the OpenMV module is used to integrate machine vision recognition and positioning so that the robotic manipulator can grasp autonomously: a normalized cross-correlation template matching algorithm recognizes the simulated chicken model, and its position information is calculated. Secondly, the D-H parameter method is used to describe the robotic arm hardware structure, and the inverse kinematics equations are obtained. Finally, after the embedded microcontroller converts the position information into six joint angles by solving the inverse kinematics equations, the stepper motor drivers control the stepper motors on the joints of the robotic arm by means of pulse outputs, so that the robotic arm moves to the position of the chicken model. The experimental test shows that the system can complete the grasping task of the chicken model.

Keywords—robotic arm; inverse kinematics; OpenMV; D-H

I. INTRODUCTION

A robotic arm is a robotic device used to simulate the movements of a human arm. It consists of several interconnected parts, controlled by a computer program, that move on one or more axes to perform a wide variety of tasks, such as assembling, handling, or welding.

Robots based on machine vision are increasingly used in modern production and life. Zhi H. Lu et al. designed a robot that can classify winter jujubes, comprising an image acquisition unit and an actuator unit; they used the YOLOv3 algorithm to establish a detection model, plus hand-designed features to calculate the maturity of the winter jujubes, with an accuracy of 97.28% [1]. Binocular stereo vision is an important form of machine vision: based on the principle of parallax, it uses imaging equipment to obtain two images of the measured object from different positions and recovers three-dimensional geometric information of the object by calculating the position deviation between corresponding points in the two images. Based on binocular vision, the eye-to-hand tracking and grasping problem of a robot was studied [2]. For precision control of the needle tip of a three-degree-of-freedom robotic arm, a visual guidance arrival method was proposed to adapt to the nonlinear movement of the end effector of the robotic arm and to avoid obstacles. The arrival model uses two cameras for image-guided control, taking the original image of the object as input and guiding the underlying vision-based controller through incremental motion commands; a neighborhood sampling method is designed for needle tip perception by detecting needle tips in image space and segmenting and extracting obstacle areas [3]. Jørgensen et al. used a structured light scanner to capture the point cloud of the object to be grasped and analyzed the point cloud to determine the picking and placing actions; their six-degree-of-freedom robot placement strategy contains several free parameters that are selected in a contextual manner [4]. A method that dispenses with the traditional calibration between camera coordinates and robot coordinates has also been proposed: only image processing and pixel-based visual feedback control are adopted to make a robotic arm find and remove workpieces from a narrow metal box, where the pixel feedback method mainly calculates the center of gravity of the object [5].

II. SYSTEM COMPOSITION

The vision system is composed of the OpenMV module, which collects target image information and performs image preprocessing, feature extraction, and recognition to obtain the target coordinates. The control system is composed of an embedded STM32F103 microprocessor and stepper motor driver modules. The target coordinates are received over the serial port; coordinate transformation and inverse kinematics then yield the rotation angle of the motor on each joint, the movement of the robotic arm is controlled, and the target is captured. The system configuration is shown in Fig. 1.

Figure 1 The schematic of system configuration

A. OpenMV Module

OpenMV (Open Machine Vision) is an open-source machine vision project developed in the United States by the Chris team using MicroPython. The module supports a UART (universal asynchronous receiver-transmitter) and is equipped with an STM32H743VI ARM Cortex-M7 processor running at 480 MHz, an OV7725 camera, 1 MB of SRAM, and 2 MB of flash memory. The OpenMV module chosen in this design is made by Singtown Technology.

B. Robotic Manipulator

The manipulator body is an AR3 six-axis aluminum alloy manipulator made by Beijing Era Chaoqun Technology Co., Ltd. Stepper motors are installed in the six joints of the robotic arm: two-phase 42 and two-phase 57 stepper motors are used, and reducers are installed to increase the output torque of the motors. The rotation of the six joints allows the end of the robotic arm to reach any position in its three-dimensional working range. The robotic arm carries a robotic claw at the end of the arm, which is driven by a servo. The servo uses the duty cycle of a 50 Hz pulse-width modulation (PWM) signal to control its rotation angle. Fig. 2 shows the overall system design block diagram.

Figure 2 The system overall design block diagram
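Since the paper specifies only that the claw servo is driven by the duty cycle of a 50 Hz PWM signal, the sketch below shows one common angle-to-duty mapping; the 0.5-2.5 ms pulse range and 0-180 degree travel are assumptions taken from typical hobby servos, not from the AR3's documentation.

```python
# Minimal sketch of the 50 Hz PWM servo control described above.
# Assumes a hobby servo mapping 0.5 ms -> 0 deg and 2.5 ms -> 180 deg;
# check the actual servo's datasheet before use.
PERIOD_MS = 20.0      # 50 Hz PWM period
MIN_PULSE_MS = 0.5    # pulse width at 0 degrees (assumed)
MAX_PULSE_MS = 2.5    # pulse width at 180 degrees (assumed)

def servo_duty_percent(angle_deg):
    """Return the PWM duty cycle (percent) for a target claw angle."""
    angle = max(0.0, min(180.0, angle_deg))
    pulse = MIN_PULSE_MS + (MAX_PULSE_MS - MIN_PULSE_MS) * angle / 180.0
    return 100.0 * pulse / PERIOD_MS

print(servo_duty_percent(90))  # 7.5% duty -> 1.5 ms pulse, mid travel
```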
III. MODEL AND METHOD

A. Target Identification and Positioning

The recognition algorithm in this project adopts the normalized cross-correlation (NCC) template matching algorithm, an image matching algorithm that compares a template with each sub-image of the target image and calculates the similarity between the two to determine the best matching position. The OpenMV module is used to collect the target image to be identified and save it as a .bmp file, which is then converted to a .pgm file and stored on the OpenMV module so that it can serve as the template image for matching. The API function find_template(template, threshold, roi, step=4, search=SEARCH_EX) performs the template matching; its return value is the bounding box of the matching location, given as the upper-left vertex coordinates plus the width and height of the box. Here, template is the template image, threshold is the similarity threshold, roi is the area in which matching is performed, SEARCH_EX selects the exhaustive search algorithm, which examines the image in more detail, and step is the number of pixels skipped between comparisons under SEARCH_EX. For target objects of different sizes and angles, several different templates can be used to improve the robustness of recognition. The template matching recognition of the chicken model is shown in Fig. 3.

Figure 3 The template matching recognition of the chicken model

From the bounding box returned by the template search function, the upper-left vertex coordinates and the width and height of the box are obtained; adding one-half of the box width and one-half of the box height to the upper-left vertex coordinates gives the center of the box, that is, the coordinate position of the simulated chicken model. A usage sketch of this recognition step follows below.
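One common form of the NCC similarity score (the exact variant used by OpenMV's implementation may differ) is

$$
\gamma(u,v) = \frac{\sum_{x,y}\big[T(x,y)-\bar{T}\big]\big[I(x+u,y+v)-\bar{I}_{u,v}\big]}{\sqrt{\sum_{x,y}\big[T(x,y)-\bar{T}\big]^{2}\,\sum_{x,y}\big[I(x+u,y+v)-\bar{I}_{u,v}\big]^{2}}}
$$

The following MicroPython sketch shows the recognition step end to end on the OpenMV module. The find_template call and its return value follow the OpenMV API as described above; the template filename, the 0.70 threshold, and the grayscale QVGA settings are assumptions for illustration.

```python
# MicroPython sketch for the OpenMV module: NCC template matching
# as described above. Filename and threshold are illustrative.
import sensor, image
from image import SEARCH_EX

sensor.reset()
sensor.set_pixformat(sensor.GRAYSCALE)  # NCC matching works on grayscale
sensor.set_framesize(sensor.QVGA)       # 320x240
sensor.skip_frames(time=2000)           # let the sensor settle

template = image.Image("/chicken.pgm")  # template stored on the module

while True:
    img = sensor.snapshot()
    r = img.find_template(template, 0.70, step=4, search=SEARCH_EX)
    if r:
        x, y, w, h = r                   # upper-left vertex, width, height
        cx, cy = x + w // 2, y + h // 2  # box center = model position
        img.draw_rectangle(r)
        print("target at", cx, cy)       # in practice, sent over the UART
```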
B. Robotic Arm Modeling Analysis

The robotic arm model of this system mainly uses the D-H (Denavit-Hartenberg) method. According to the D-H parameter rules and the parameters and structure of the robotic arm system, the D-H parameter table, Table 1, can be obtained.

Table 1 The D-H parameter table of the robotic arm

The robotic arm control system obtains the position information for the end of the robotic arm through the OpenMV module and solves the joint angle variables of the six joints of the robotic arm through the manipulator inverse kinematics. The robotic arm has six rotating joints, and the axes of the three adjacent joints 4, 5, and 6 at the end intersect at one point, so by Pieper's criterion the manipulator must have an analytical solution.
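As a sketch of how the rows of a D-H table such as Table 1 are turned into link transforms, the following assumes the modified (Craig) D-H convention, which is consistent with the closed-form entries of Eqs. (1)-(3) below; the AR3's actual parameter values are not reproduced here.

```python
# Sketch: homogeneous link transform under the modified (Craig) D-H
# convention, assumed here to match Eqs. (1)-(3) below.
import numpy as np

def dh_transform(alpha_prev, a_prev, d, theta):
    """Transform from frame i-1 to frame i for one row of a D-H table."""
    ca, sa = np.cos(alpha_prev), np.sin(alpha_prev)
    ct, st = np.cos(theta), np.sin(theta)
    return np.array([
        [ct,      -st,      0.0,  a_prev],
        [st * ca,  ct * ca, -sa, -sa * d],
        [st * sa,  ct * sa,  ca,  ca * d],
        [0.0,      0.0,     0.0,  1.0],
    ])

def chain(rows):
    """Compose the link transforms of a D-H table (one row per joint)."""
    T = np.eye(4)
    for row in rows:
        T = T @ dh_transform(*row)
    return T
```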

Decomposing the transformation from frame 1 to the end frame 6 gives

$$
{}^{1}_{6}T = {}^{1}_{3}T\,{}^{3}_{6}T =
\begin{bmatrix}
{}^{1}r_{11} & {}^{1}r_{12} & {}^{1}r_{13} & {}^{1}p_{x} \\
{}^{1}r_{21} & {}^{1}r_{22} & {}^{1}r_{23} & {}^{1}p_{y} \\
{}^{1}r_{31} & {}^{1}r_{32} & {}^{1}r_{33} & {}^{1}p_{z} \\
0 & 0 & 0 & 1
\end{bmatrix}
\tag{1}
$$
where, in Eq. (1),

$$
\begin{aligned}
{}^{1}r_{11} &= c_{23}\,(c_{4}c_{5}c_{6} - s_{4}s_{6}) - s_{23}\,s_{5}c_{6} \\
{}^{1}r_{21} &= -s_{4}c_{5}c_{6} - c_{4}s_{6} \\
{}^{1}r_{31} &= -s_{23}\,(c_{4}c_{5}c_{6} - s_{4}s_{6}) - c_{23}\,s_{5}c_{6} \\
{}^{1}r_{12} &= -c_{23}\,(c_{4}c_{5}s_{6} + s_{4}c_{6}) + s_{23}\,s_{5}s_{6} \\
{}^{1}r_{22} &= s_{4}c_{5}s_{6} - c_{4}c_{6} \\
{}^{1}r_{32} &= s_{23}\,(c_{4}c_{5}s_{6} + s_{4}c_{6}) + c_{23}\,s_{5}s_{6} \\
{}^{1}r_{13} &= -c_{23}\,c_{4}s_{5} - s_{23}\,c_{5} \\
{}^{1}r_{23} &= s_{4}s_{5} \\
{}^{1}r_{33} &= s_{23}\,c_{4}s_{5} - c_{23}\,c_{5} \\
{}^{1}p_{x} &= a_{2}c_{2} + a_{3}c_{23} - d_{4}s_{23} \\
{}^{1}p_{y} &= d_{3} \\
{}^{1}p_{z} &= -a_{3}s_{23} - a_{2}s_{2} - d_{4}c_{23}
\end{aligned}
\tag{2}
$$

in which $c_{i} = \cos\theta_{i}$, $s_{i} = \sin\theta_{i}$, and

$$
\begin{aligned}
c_{23} &= \cos(\theta_{2} + \theta_{3}) = c_{2}c_{3} - s_{2}s_{3} \\
s_{23} &= \sin(\theta_{2} + \theta_{3}) = c_{2}s_{3} + s_{2}c_{3}
\end{aligned}
\tag{3}
$$
From Eqs. (1)-(3), the six joint angles of the robotic arm can then be solved.
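As a sanity check on the reconstructed Eq. (2), the sketch below evaluates its closed-form position entries and compares them with the product of modified D-H link transforms (the same construction as in the earlier sketch); the link values a2, a3, d3, d4 and the sample joint angles are arbitrary illustrative numbers, not the AR3's actual parameters.

```python
# Numeric check that Eq. (2) matches the composed modified D-H link
# transforms from frame 1 to frame 6. Link values are illustrative.
import numpy as np

def dh(alpha_prev, a_prev, d, theta):
    """Modified D-H link transform (same form as in the earlier sketch)."""
    ca, sa = np.cos(alpha_prev), np.sin(alpha_prev)
    ct, st = np.cos(theta), np.sin(theta)
    return np.array([[ct, -st, 0.0, a_prev],
                     [st * ca, ct * ca, -sa, -sa * d],
                     [st * sa, ct * sa, ca, ca * d],
                     [0.0, 0.0, 0.0, 1.0]])

a2, a3, d3, d4 = 0.30, 0.05, 0.10, 0.25        # assumed link values (m)
t2, t3, t4, t5, t6 = 0.4, -0.7, 0.3, 0.5, 0.2  # sample joint angles (rad)

# PUMA-style rows (alpha_{i-1}, a_{i-1}, d_i, theta_i) for joints 2..6
rows = [(-np.pi / 2, 0.0, 0.0, t2),
        (0.0, a2, d3, t3),
        (-np.pi / 2, a3, d4, t4),
        (np.pi / 2, 0.0, 0.0, t5),
        (-np.pi / 2, 0.0, 0.0, t6)]
T16 = np.eye(4)
for row in rows:
    T16 = T16 @ dh(*row)

c2, s2 = np.cos(t2), np.sin(t2)
c23, s23 = np.cos(t2 + t3), np.sin(t2 + t3)
p = [a2 * c2 + a3 * c23 - d4 * s23,    # Eq. (2): px
     d3,                               # Eq. (2): py
     -a3 * s23 - a2 * s2 - d4 * c23]   # Eq. (2): pz
assert np.allclose(T16[:3, 3], p)
assert np.isclose(T16[1, 2], np.sin(t4) * np.sin(t5))  # Eq. (2): r23
print("Eq. (2) agrees with the chained D-H transforms")
```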
IV. TEST AND RESULTS

The test is divided into two parts: one compares the identified and calculated coordinates of the target with the actual coordinates; the other compares the position coordinates of the end of the robotic arm after moving with the actual coordinates. The target object is a chicken model.

A. Target Position Recognition Test

Table 2 Target position recognition

From Table 2, it can be seen that the distance between the identified coordinates and the actual coordinates is at most 23 mm, and the relative error is at most 4.5%.

B. Robotic Arm Grabbing Test

Table 3 The grabbed target placed at a Y-coordinate

Table 3 shows that the absolute error between the end movement position of the robotic arm and the actual position is less than 36 mm, the relative error is less than 7.4%, and the average success rate of grasping is 86.67%. The actual process of the robotic arm grabbing the chicken model is shown in Fig. 4.

Figure 4 Actual diagram of the robotic arm grabbing the chicken model

V. CONCLUSION

A camera-based robotic arm grabbing system is designed in this paper. The OpenMV module is used to identify the target object and calculate its central coordinates; the position information is then transmitted to the manipulator control system via UART serial communication; the six manipulator joint angles are obtained by solving the inverse kinematics equations of the manipulator; and the manipulator is controlled to grab the object once the six joint angles are obtained. The test shows that the robotic arm can complete the task of grasping objects well.
ACKNOWLEDGMENT

The authors wish to acknowledge the assistance of the organization committee and the valuable advice of the reviewers. This work is supported by the Guangdong Province Special Fund for Modern Agricultural Industry Common Key Technology R&D Innovation Team, grant No. 2023KJ129.

REFERENCES

[1] Zhi H. Lu, et al., "Design of a winter-jujube grading robot based on machine vision," Computers and Electronics in Agriculture, vol. 186, 2021, p. 106170.
[2] Yi-Chun Du, Taryui Taryui, Ching-Tang Tsa, Ming-Shyan Wang, "Eye-to-hand robotic tracking and grabbing based on binocular vision," Microsystem Technologies, vol. 27, 2021, pp. 1699–1710.

[3] Ying Li, Fangbo Qin, Shaofeng Du, De Xu, Jianqiang Zhang, "Vision-Based Imitation Learning of Needle Reaching Skill for Robotic Precision Manipulation," Journal of Intelligent & Robotic Systems, vol. 101, 2021, p. 22.
[4] Troels Bo Jørgensen, et al., "An Adaptive Robotic System for Doing Pick and Place Operations with Deformable Objects," Journal of Intelligent & Robotic Systems, vol. 94, 2019, pp. 81–100.
[5] Kohei Miki, Fusaomi Nagata, Takeshi Ikeda, Keigo Watanabe, Maki K. Habib, "Molded article picking robot using image processing technique and pixel-based visual feedback control," Artificial Life and Robotics, vol. 26, 2021, pp. 390–395.
