Article
Design and Development of a Low-Cost UGV 3D Phenotyping
Platform with Integrated LiDAR and Electric Slide Rail
Shuangze Cai 1,2, Wenbo Gou 2, Weiliang Wen 2,3, Xianju Lu 2, Jiangchuan Fan 2,4,* and Xinyu Guo 1,2,3,*
Abstract: Unmanned ground vehicles (UGVs) have attracted much attention in crop phenotype monitoring due to their light weight and flexibility. This paper describes a new UGV equipped with an electric slide rail and a high-throughput point cloud acquisition and phenotype extraction system. The designed UGV carries an autopilot system, a small electric slide rail, and Light Detection and Ranging (LiDAR) to achieve high-throughput, high-precision automatic crop point cloud acquisition and map building. The phenotype analysis system realized single-plant segmentation and pipeline extraction of plant height and maximum crown width from the crop point cloud using the random sample consensus (RANSAC), Euclidean clustering, and k-means clustering algorithms. This phenotyping system was used to collect point cloud data and extract plant height and maximum crown width for 54 greenhouse-potted lettuce plants. The results showed that the coefficients of determination (R²) between the collected data and manual measurements were 0.97996 and 0.90975, respectively, while the root mean square errors (RMSE) were 1.51 cm and 4.99 cm,
respectively. At less than a tenth of the cost of the PlantEye F500, the UGV achieves phenotypic data acquisition with less error and detects morphological trait differences between lettuce types. Thus, it could be suitable for actual 3D phenotypic measurements of greenhouse crops.

Keywords: 3D phenotyping platform; electric slide rail; LiDAR; low-cost UGV; point cloud processing

Citation: Cai, S.; Gou, W.; Wen, W.; Lu, X.; Fan, J.; Guo, X. Design and Development of a Low-Cost UGV 3D Phenotyping Platform with Integrated LiDAR and Electric Slide Rail. Plants 2023, 12, 483. https://doi.org/10.3390/plants12030483

Academic Editor: Georgios Koubouris

Received: 9 December 2022; Revised: 12 January 2023; Accepted: 18 January 2023; Published: 20 January 2023

Copyright: © 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

1. Introduction

Plant phenotypes are recognizable physical, physiological, and biochemical characteristics and traits that arise in part or whole due to the interaction of genes with the environment [1,2]. Plant phenomics has gradually become a key research area in basic and applied plant science in the past two decades. Nevertheless, inadequate phenotype detection technology is one of the major bottlenecks in crop breeding [3]. Traditional phenotype detection methods are labor-intensive, time-consuming, and subjective, with no uniform standards [4]. Therefore, machines that can replace manual plant phenotype detection are needed. Many crop phenotype information acquisition platforms have recently been developed in China and abroad. Based on the growth scenarios of crops, the platforms can be divided into field phenotyping and indoor phenotyping platforms [5]. Indoor phenotyping platforms include Crop 3D, developed by the Institute of Botany, Chinese Academy of Sciences, and Scanalyzer 3D, developed by LemnaTec, Germany, while the outdoor phenotyping platforms include Field Scanalyzer from Rothamsted Research Center, UK, and LQ-Fieldpheno from the Agricultural Information Technology Research Center, Beijing, China. The efficiency and quality of phenotype information acquisition have been greatly improved due to the rapid development of sensor technology and equipment for plant phenotypes [6]. As a result, breeding programs focused on feeding billions of people worldwide have significantly improved [7].
The major phenotype acquisition devices include orbital [8], robotic [9], vehicle-mounted [10], suspension [11], and unmanned aerial vehicle-mounted [12] systems. However,
these phenotyping devices, including vehicle-mounted and robotic phenotyping devices,
can easily interfere with or even crush crops due to the complex outdoor conditions. As a
result, their use is often characterized by large data volumes, low ground resolution, much additional information to manage (geographic location, light, temperature, water, air, and other environmental factors), non-uniform acquisition standards, high data uncertainty, low repeatability, and strong time dependence [13]. The indoor phenotyping platform can simulate diverse
crop growth conditions when combined with environmental control equipment (controlled
greenhouse and artificial climate chamber) for the assessment of phenotypic plasticity and
stability, identification of key phenotypic traits (yield, quality, and various resistance indi-
cators) in all aspects, and obtaining statistically significant research conclusions. Such platforms can achieve precise regulation, graded simulation, and automated precision collection, which cannot be easily achieved in outdoor environments [14]. Therefore, indoor pheno-
typic monitoring technologies are suitable for accurate and graded simulation and targeted
crop growth and development research under complex experimental conditions [15].
Machine vision methods can be accurate and effective in measuring key growth parameters (plant height and maximum crown width) of common crops, especially leafy vegetables [16]. However, the special structure of plants and the complex environment limit the accurate extraction of plant phenotypic parameters from 2D images [17]. Therefore, the 3D structure is crucial for assessing plant growth status. Fur-
thermore, studies have shown that equipment and technology for acquiring 3D phenotypes
of crop canopies are crucial for quality breeding, scientific cultivation, and fine manage-
ment [18,19]. Various devices and methods have been used to assess crop 3D information
based on the principles of binocular stereo vision, structured light, Time of Flight (ToF), and
multi-view stereo reconstruction (MVS). For example, Song et al. [20] obtained 3D point
clouds of horticultural crops and achieved surface reconstruction based on binocular stereo
vision. Hui et al. [21] reconstructed 3D plant point cloud models of cucumber, pepper, and
eggplant and calculated phenotypic parameters, such as leaf length, leaf width, leaf area,
plant height, and maximum canopy width based on multi-view stereo vision method and
laser scanning method. LiDAR is widely used to acquire crop phenotypic information due
to its high accuracy and fast scanning speed. Zheng et al. [22] also obtained 3D canopy
point cloud data of trees and estimated their leaf area index using ground-based laser
scanning. Sun et al. [23] extracted individual tree crowns and canopy width from aerial
laser scanning data. Zhang et al. [24] analyzed the dynamic phenotypic changes of trees
under wind disturbance using LiDAR.
In summary, UGVs equipped with LiDAR systems have received much attention due
to their flexibility in monitoring crop phenotypes and high accuracy in the 3D reconstruction
of crops. For example, UGVs equipped with LiDAR systems have been used to measure
crop nitrogen status [25], estimate above-ground biomass [26], and measure planting
density [27]. The commonly used orbital overhead travel phenotyping platforms are limited by their high cost and immobility, while a moving UGV loses accuracy during sensor acquisition due to the vibration caused by uneven ground. In this study, a new UGV phenotyping platform equipped with an electric slide rail and a phenotype data acquisition-analysis pipeline was developed, using the electric slide rail and LiDAR for accurate data acquisition. Furthermore, the 3D reconstruction and growth parameter measurement
of lettuce grown in greenhouse pots were conducted. This study aimed to establish a
low-cost automated crop growth non-destructive measurement system with good accuracy
and practicality.
2. Methods
Figure 2. Hardware structure and physical structure of the UGV. (a) Three-dimensional view; (b) decomposition diagram; (c) real view of the UGV collecting data; (d) LiDAR installation diagram.
(2) Part B represents the electric slide rail installed at the bottom of the chassis. It is connected to the industrial control computer (IPC) through an RS485-to-USB adapter. The control software on the computer can set the movement direction, speed, and start/stop position of the slide rail.

(3) Part C represents the LiDAR, model VLP-16 (Velodyne, Silicon Valley, CA, USA), installed on the electric slide rail. This sensor provides 16 scan lines over 360 degrees, a horizontal measurement angle resolution of 0.1° to 0.4°, a vertical measurement angle range of 30 degrees, and a vertical angle resolution of 2°. The LiDAR is installed on the slide rail at a height of 1 m from the ground. The IPC is connected to the LiDAR through Ethernet and can control the LiDAR and collect the data acquired from the sensor.

(4) Part D represents the control box and battery module of the moving part of the vehicle. A symmetrical layout with four-wheel drive is used to control the movement of the platform. An independent brushless motor drives each wheel. Meanwhile, four independent motors steer the four wheels. The UGV adopts two steering schemes: four-wheel steering with front and rear wheels in the same direction, and front-wheel steering (Figure 3). The steering scheme can be switched using a remote control, depending on the scenario. The turning radius is relatively small when using front- and rear-wheel steering, which allows directional translation in a narrow space. The turning radius is larger when using front-wheel steering, but the orientation of the vehicle can be adjusted. The machine is controlled by an Arduino control board, four absolute encoders, and an IPC, which form a closed-loop control system. The IPC controls the steering of the four wheels through the encoders and the Arduino control board, and controls UGV movement by driving the motors of the four wheels. RS485 communication is used among the encoders, drivers, Arduino control board, and IPC. The whole vehicle is powered by two lead-acid batteries and a power regulator.
Figure 3. Schematic diagram of two steering schemes. (a) Four-wheel co-steering; (b) front-wheel steering.
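The slide rail control path described above (IPC → RS485-to-USB adapter → rail controller) can be illustrated with a short script. The paper does not document the rail's serial protocol, so the port name, baud rate, and command frames below are invented for illustration only; a minimal sketch using pyserial:

```python
# Hypothetical sketch of the IPC -> electric slide rail link. The RS485-to-USB
# adapter appears to the operating system as a serial port; the frame layout
# below is ASSUMED, not the rail's documented protocol.
import serial

class SlideRail:
    def __init__(self, port="/dev/ttyUSB0", baudrate=9600):
        # 9600-8-N-1 is a common RS485 default (assumption).
        self.link = serial.Serial(port, baudrate, timeout=1.0)

    def _send(self, frame: bytes) -> bytes:
        self.link.write(frame)
        return self.link.read(8)  # fixed-size status reply (assumption)

    def move(self, direction: int, speed_mm_s: int) -> bytes:
        # direction: 0 = toward home, 1 = away from home; speed in mm/s.
        frame = bytes([0x01, 0x10, direction]) + speed_mm_s.to_bytes(2, "big")
        return self._send(frame)

    def stop(self) -> bytes:
        return self._send(bytes([0x01, 0x20, 0x00, 0x00, 0x00]))

rail = SlideRail()
rail.move(direction=1, speed_mm_s=57)  # ~344 cm/min, the rail speed reported in Section 4.3
rail.stop()
```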
When each point P in the kth frame is processed, its coordinates can be translated along the vector d to obtain the following equation:

P' = P + d    (1)

where

P = \begin{pmatrix} x \\ y \\ z \\ 1 \end{pmatrix}, \quad P' = \begin{pmatrix} x' \\ y' \\ z' \\ 1 \end{pmatrix}, \quad d = \begin{pmatrix} \alpha_x \\ \alpha_y \\ \alpha_z \\ 1 \end{pmatrix}    (2)
The transformation process from P to P' can be expressed as P' = TP, using the following transformation matrix T:

T = T(\alpha_x, \alpha_y, \alpha_z) = \begin{pmatrix} 1 & 0 & 0 & \alpha_x \\ 0 & 1 & 0 & \alpha_y \\ 0 & 0 & 1 & \alpha_z \\ 0 & 0 & 0 & 1 \end{pmatrix}    (3)
Since the LiDAR is moving in the z-axis direction towards the sensor, Equation (3) can be expressed as follows:

\alpha_x = 0, \quad \alpha_y = 0, \quad \alpha_z = \frac{ka}{15}
Each point P of each frame can be derived from the actual spatial coordinates. A Velodyne data processing software was developed accordingly. The method of uniform superposition was used to stitch each frame into a dense point cloud of a UGV fixed-point block. The Velodyne data processing flow is shown in Figure 6.
Figure 6. Velodyne data processing flow. (a) Velodyne raw data. (b) Dense point cloud of a UGV fixed-point block.
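A minimal sketch of the uniform-superposition stitching expressed by Equations (1)–(3), assuming each LiDAR frame arrives as an (N, 3) NumPy array and that a is the displacement constant from Equation (3); the function and variable names are ours, not the authors':

```python
# Stitch LiDAR frames into one dense fixed-point block cloud: frame k is
# shifted along the rail (z) axis by alpha_z = k*a/15 (Eq. (3)) and all
# shifted frames are superimposed.
import numpy as np
import open3d as o3d

def stitch_frames(frames, a):
    """frames: list of (N_k, 3) arrays, one per frame; a: Eq. (3) constant."""
    stacked = []
    for k, pts in enumerate(frames):
        d = np.array([0.0, 0.0, k * a / 15.0])  # translation vector d
        stacked.append(pts + d)                 # P' = P + d (Eq. (1))
    block = o3d.geometry.PointCloud()
    block.points = o3d.utility.Vector3dVector(np.vstack(stacked))
    return block
```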
2.5. Point Cloud Processing and Phenotype Estimation

An automated processing pipeline was developed using library functions in Open3D. The Python language, the Visual Studio 2019 IDE, and the automated processing pipeline were used to post-process the block point clouds and perform crop phenotype estimation. Firstly, all block point clouds acquired by the UGV in the crop strip were spliced, then normalized to remove the noise, and the ground was fitted. Each crop was then segmented using a clustering algorithm, and the phenotypic parameters, including plant height and maximum crown width, were extracted for each crop. The point cloud processing pipeline is shown in Figure 7.
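The text does not name the exact filter behind the "normalize and remove the noise" step. One Open3D routine that fits the description is statistical outlier removal; a sketch with assumed parameters:

```python
# Plausible denoising step for the pipeline's noise-removal stage
# (assumed filter choice and parameters, not the authors' exact settings).
import open3d as o3d

def denoise(pcd: o3d.geometry.PointCloud) -> o3d.geometry.PointCloud:
    # Drop points whose mean distance to their 20 nearest neighbours is more
    # than 2 standard deviations above the average for the whole cloud.
    clean, _ = pcd.remove_statistical_outlier(nb_neighbors=20, std_ratio=2.0)
    return clean
```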
Coarse registration must be performed first before using this algorithm for registration. Each block of the data acquired by the UGV has an equal spacing of 190 cm, with an overlap of 30% between each two adjacent blocks. Therefore, coarse registration can be completed by moving the kth block 190(k − 1) cm in the z-axis direction. Moreover, the ICP algorithm is based on the least squares method, which finds the nearest neighbors under certain constraints to calculate the best registration parameters, i.e., the rotation matrix R and the translation vector t that minimize the error function. The error function E(R, t) can be expressed as follows:

E(R, t) = \frac{1}{n} \sum_{i=1}^{n} \| q_i - (R p_i + t) \|^2    (4)

where n, p_i, q_i, R, and t represent the number of nearest-neighbor point pairs, a point in the target point cloud P, the nearest point in the source point cloud Q corresponding to p_i, the rotation matrix, and the translation vector, respectively. The ICP algorithm is implemented as follows:
(1) The set of points p_i ∈ P in the target point cloud P is selected;
(2) The set of points q_i corresponding to p_i in the source point cloud Q is identified (satisfying q_i ∈ Q such that \|q_i - p_i\| is the minimum value);
(3) The rotation matrix R and translation vector t are obtained by calculating the relationship between the corresponding point sets such that the error function is minimized;
(4) p_i is transformed, and a new set of corresponding points is obtained;
(5) The average distance d between the new p_i and its corresponding point set q_i is calculated as follows:

d = \frac{1}{n} \sum_{i=1}^{n} \| p_i - q_i \|^2    (5)
(6) The calculation is stopped if d is less than the given threshold or the set number of iterations is exceeded; otherwise, step (2) and the subsequent steps are re-executed until the convergence conditions are met. Herein, the two point clouds were registered (Figure 7b) and then added to the global point cloud to obtain a complete point cloud of the crop strip. The complete point cloud of the strip is shown in Figure 7c.
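Both stages map directly onto Open3D's registration API: the coarse 190(k − 1) cm shift from the text, then point-to-point ICP to minimize Equation (4). The correspondence-distance threshold here is an assumption; a sketch:

```python
# Coarse-to-fine registration of the kth block into the global strip cloud.
import copy
import numpy as np
import open3d as o3d

def register_block(global_cloud, block, k, spacing_cm=190.0):
    # Coarse registration: shift the kth block by 190*(k-1) cm along z.
    moved = copy.deepcopy(block)
    moved.translate((0.0, 0.0, spacing_cm * (k - 1)))
    # Fine registration: ICP minimises E(R, t) of Eq. (4).
    result = o3d.pipelines.registration.registration_icp(
        moved, global_cloud,
        max_correspondence_distance=5.0,  # cm; assumed search radius
        init=np.eye(4),
        estimation_method=o3d.pipelines.registration
            .TransformationEstimationPointToPoint())
    moved.transform(result.transformation)
    return moved  # caller appends this to the global point cloud
```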
A plane model AX + BY + CZ + D = 0 was constructed from three randomly sampled points (X_1, Y_1, Z_1), (X_2, Y_2, Z_2), and (X_3, Y_3, Z_3), where

A = (Y_2 - Y_1)(Z_3 - Z_1) - (Z_2 - Z_1)(Y_3 - Y_1)
B = (Z_2 - Z_1)(X_3 - X_1) - (X_2 - X_1)(Z_3 - Z_1)
C = (X_2 - X_1)(Y_3 - Y_1) - (Y_2 - Y_1)(X_3 - X_1)
D = -(AX_1 + BY_1 + CZ_1)    (6)

The distance L from any point (X_0, Y_0, Z_0) in space to this plane was calculated as follows:

L = \frac{|AX_0 + BY_0 + CZ_0 + D|}{\sqrt{A^2 + B^2 + C^2}}    (7)
A point was considered inside the model when the distance L between the point and this hypothetical plane was ≤ ΔT_1. The number of interior points of the model was recorded by iterating through the N − 3 points other than the initially sampled three points. The number of interior points of each new model was obtained in the same way by randomly sampling three points and constructing a planar model (each such random sampling counts as one iteration). The probability of producing a reasonable result increases with an increasing number of iterations. Finally, the ground model with the highest number of interior points was selected as the best fit. The ground detection results are shown in Figure 7d; the red points in the figure represent ground points.
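Open3D packages this sampling-and-inlier-counting loop as segment_plane, which can stand in for the ground-fitting step just described. The distance threshold plays the role of ΔT_1; its value here is an assumption:

```python
# RANSAC ground fitting with Open3D; returns the plane, ground points,
# and the remaining (crop) points.
import open3d as o3d

def split_ground(pcd, dist_threshold=1.0, iterations=1000):
    plane, inliers = pcd.segment_plane(
        distance_threshold=dist_threshold,  # distance test of Eq. (7)
        ransac_n=3,                         # three points define each candidate plane
        num_iterations=iterations)          # more iterations -> better best-fit odds
    ground = pcd.select_by_index(inliers)
    crops = pcd.select_by_index(inliers, invert=True)
    return plane, ground, crops
```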
Euclidean clustering can only partition single crops where there is no overlap of leaves between two pots; it cannot separate crops that have large leaves and grow close to each other. In such cases, the K-means clustering algorithm was used for partitioning. The K-means algorithm is a partitioning clustering algorithm in which the mean value of all objects in a cluster represents the center of that cluster. Its input is the number of clusters (K) and a data set containing n objects (D), while the output is the set of K clusters.
The algorithm flow is shown below:
(1) Choose any K objects from D as the initial clustering centers.
(2) Assign each object to the most similar cluster based on the mean value of the objects
in the cluster.
(3) Update the cluster mean, i.e., the mean of the objects in each cluster is recalculated.
(4) Repeat steps (2) and (3) until the clusters do not change.
The K-means clustering method requires the number of clusters and the initial cluster centers to be determined in advance. In this study, four crops could not be partitioned by Euclidean clustering (Figure 7e), and thus the K-means clustering algorithm was used for partitioning (K = 4), as sketched below. The partitioning results are shown in Figure 7f.
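A sketch of the two-stage segmentation under stated assumptions: Open3D's density-based clustering stands in for the Euclidean clustering step, and scikit-learn's KMeans splits oversized clusters of touching plants (K = 4 as above). The "oversized" heuristic and the parameters are ours, not the authors' rule:

```python
# Single-plant segmentation: Euclidean-style clustering first, K-means for
# clumps of touching plants that distance-based clustering cannot separate.
import numpy as np
import open3d as o3d
from sklearn.cluster import KMeans

def segment_plants(crops, eps=2.0, min_points=50, k_touching=4):
    labels = np.asarray(crops.cluster_dbscan(eps=eps, min_points=min_points))
    pts = np.asarray(crops.points)
    sizes = np.bincount(labels[labels >= 0])  # points per cluster (label -1 = noise)
    plants = []
    for lab, size in enumerate(sizes):
        cluster = pts[labels == lab]
        if size > 2 * np.median(sizes):
            # Assumed heuristic: an unusually large cluster holds several
            # touching plants, so split it with K-means (K = k_touching).
            sub = KMeans(n_clusters=k_touching, n_init=10).fit_predict(cluster)
            plants.extend(cluster[sub == j] for j in range(k_touching))
        else:
            plants.append(cluster)
    return plants
```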
3. Materials
The experiments were conducted in a joint greenhouse of the Beijing Academy of Agriculture and Forestry (39°56′ N, 116°16′ E). The lettuce planting area measured 10 m × 40 m.
Six lettuce types (C, W, R, S, B, and L representing Crisphead lettuce, Wild lettuce, Romaine,
Stem lettuce, Butterhead lettuce, and loose-leaf lettuce) were grown in pots. Three varieties
of each type were planted, totaling 18 varieties with 3 replicates for each variety. One lettuce
plant was planted per pot. The pots were under normal water and fertilizer management.
The data were acquired on 11, 12, and 13 November 2021. We obtained 54 point clouds of potted plants. A commercial mobile laser 3D plant phenotyping platform, PlantEye F500, developed by Phenospex B.V. (Heerlen, The Netherlands), was also used (http://phenospex.com/products/plant-phenotyping/science-planteye-3d-laser-scanner/, accessed on 15 October 2021) to obtain point cloud data. The measurement principle and physical installation of this sensor are shown in Figure 8. Furthermore, plant height and maximum crown width were extracted from the obtained point cloud data and compared with the manual measurements through the point cloud processing and analysis pipeline. The extraction results are shown in Table 1. The acquired lettuce strips were arranged according to the width of the UGV since the width of the UGV was fixed.
Figure 8. PlantEye F500 measurement principle and physical installation diagram. (a) The 3D laser scan sensor. The red and blue lines represent the laser line and the reflection of the laser after projecting onto the plant, received by the CMOS, respectively. (b) The sensor mounted on the overhead orbital system for easy movement.
Table 1. Performance and cost comparison between the UGV-LiDAR phenotyping system and PlantEye F500.

Parameter Name               UGV-LiDAR Phenotyping System    PlantEye F500
Flux                         810 plants/h                    1020 plants/h
Cost                         $11,780                         $147,000
Point cloud density          380,000 points/plant            100,000 points/plant
Pipeline processing time     12,628 ms/plant                 3157 ms/plant

4. Results

4.1. Point Cloud Quality
The point cloud data of the potted lettuce were obtained using the UGV-LiDAR phenotyping platform and PlantEye F500. The single point cloud of each lettuce plant was segmented through the developed point cloud processing pipeline. The visualization of the single-plant point clouds of the lettuce obtained by the two platforms is shown in Figure 9.

Although the lettuce point cloud obtained by PlantEye F500 showed a better view due to the RGB information, a part of the leaf may be missing in reality. The obtained leaf point cloud only had the points of the upper-surface leaves, with only a thin layer, and did not provide information on the lower part of the obscured leaves. The point cloud of a single lettuce obtained by the designed phenotyping platform contained the echo intensity information; the green shade in the figure represents the laser echo intensity. However, this method did not provide RGB information. Nevertheless, the platform obtained more leaf points than PlantEye F500 and contained the lettuce leaf points that were not severely obscured. The thickness of individual leaves obtained by the platform
also increased compared with the thickness obtained by PlantEye F500. PlantEye F500 uses a low-powered single-line laser with weak penetration and thus can only detect the upper leaves. Meanwhile, the UGV-LiDAR phenotyping platform uses a more powerful 16-line LiDAR and can handle multiple echoes, and thus can detect the obscured leaves. It can also detect the reflected light from the laser within a single leaf, thus increasing the thickness of the single-leaf point cloud. Therefore, the point cloud of a single lettuce plant obtained by the UGV-LiDAR phenotyping platform had better point cloud integrity than that obtained by PlantEye F500, which gives it some advantages in the phenotype detection of the canopy 3D structure and outer contour of crops.
Figure 9. Comparison of the single-plant point clouds of lettuce acquired by the UGV-LiDAR platform and PlantEye.
0.90975 and RMSE: 4.99 cm). Correlation analysis showed that both systems could accurately measure the plant height and maximum canopy width of the vegetables. However, the developed UGV-LiDAR phenotyping system could estimate plant height and maximum crown width more accurately than the PlantEye F500 system.
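This excerpt does not spell out the exact trait formulas, but both estimates can be computed from a segmented single-plant cloud under common definitions: height as the vertical extent above the fitted ground plane, and maximum crown width as the largest horizontal pairwise distance. A sketch (the up axis is a parameter, since it depends on how the sensor frame is oriented):

```python
# Plant height and maximum crown width from one segmented plant cloud,
# using assumed (conventional) definitions of the two traits.
import numpy as np
from scipy.spatial import ConvexHull
from scipy.spatial.distance import pdist

def traits(plant_xyz, ground_level, up_axis=2):
    """plant_xyz: (N, 3) points of one plant; ground_level: plane height."""
    height = plant_xyz[:, up_axis].max() - ground_level
    horiz = np.delete(plant_xyz, up_axis, axis=1)  # project onto ground plane
    hull = horiz[ConvexHull(horiz).vertices]       # hull keeps pdist affordable
    crown_width = pdist(hull).max()                # widest horizontal extent
    return height, crown_width
```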
Figure 10. Comparison of plant height and maximum crown width extracted using the point cloud processing pipeline. (a) Linear fit of plant height estimated by the point cloud obtained by the UGV-LiDAR phenotyping system against manual measurements. (b) Linear fit of maximum crown width estimated by the point cloud obtained by the UGV-LiDAR phenotyping system against manual measurements. (c) Linear fit of plant height estimated by the point cloud obtained by PlantEye against manual measurements. (d) Linear fit of the maximum canopy width estimated by the point cloud obtained by PlantEye against manual measurements.
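The agreement statistics behind Figure 10 (R² of a linear fit and RMSE against manual measurements) are straightforward to reproduce from paired arrays; a minimal sketch with toy values, not the study's data:

```python
# R^2 of a linear fit and RMSE between platform estimates and manual
# measurements, as plotted in Figure 10.
import numpy as np

def agreement(estimated, measured):
    slope, intercept = np.polyfit(measured, estimated, 1)  # least-squares line
    fitted = slope * measured + intercept
    ss_res = np.sum((estimated - fitted) ** 2)
    ss_tot = np.sum((estimated - estimated.mean()) ** 2)
    r2 = 1.0 - ss_res / ss_tot
    rmse = np.sqrt(np.mean((estimated - measured) ** 2))
    return r2, rmse

heights_ugv = np.array([18.2, 20.1, 15.7, 22.4])     # toy values (cm)
heights_manual = np.array([18.0, 19.5, 16.0, 22.0])
print(agreement(heights_ugv, heights_manual))
```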
4.3. Performance and Cost

PlantEye F500 is a single-line laser mounted on the orbital overhead travel phenotyping platform and was established in the greenhouse of the Beijing Academy of Agriculture and Forestry. The moving speed of the orbit was set to 300 cm/min in actual use. The orbit can acquire about 1020 plants per hour without stopping. Although the moving speed of the Velodyne VLP-16 LiDAR on the vehicle track was faster (344 cm/min), it had a slower acquisition speed (810 plants per hour) than PlantEye F500 because the vehicle requires time to move. The PlantEye F500 costs about $147,000, while the UGV-LiDAR phenotyping platform costs only $11,780, which is significantly lower. The point cloud pipeline process was run on a desktop workstation (Intel Core i7 processor, 2.9 GHz CPU, 32 GB RAM, Windows 11 OS). Moreover, the point cloud of lettuce acquired by the UGV-LiDAR phenotyping system was denser, with an average of about 380,000 points per plant and a processing time of 12,628 ms, while the point cloud acquired by PlantEye F500 was sparse, with an average of about 100,000 points per plant and a processing time of 3157 ms. This indicates that the UGV-LiDAR phenotyping system was less efficient in post-processing than PlantEye F500. A comparison of performance and cost is shown in Table 1.
Figure 11. Analysis of plant height and maximum crown width differences. The central horizontal line indicates the median. The top and bottom of the box indicate the 25th and 75th percentiles, respectively. The upper and lower solid dots indicate outliers beyond the upper and lower quartiles, respectively; whiskers extend to the extreme non-outliers. The hollow dots represent the mean. Different letters indicate statistically significant differences between species (p < 0.05).
5. Discussion
The comparison results showed that the phenotyping platform and phenotypic parameter extraction pipeline could reliably measure the plant height and maximum crown width of greenhouse-potted crops using the synergistic operation of LiDAR and track, thus helping breeders to easily observe and screen good traits in many samples. PlantEye F500 is a well-established commercial plant 3D scanner used for automatic and continuous observation of plant growth status [26]. Compared with PlantEye F500, the UGV had higher estimation accuracy for plant height and lower estimation accuracy for maximum crown width. Moreover, the UGV was less costly compared with other phenotyping platforms, such as the suspension phenotyping platform [11], orbital overhead travel phenotyping platforms [8], and other immovable on-site phenotypic platforms. Furthermore, these platforms are difficult to disassemble and install in other plots after construction is completed,
while UGV and UAV platforms can be used in any plot due to their high flexibility [29].
However, the turbulence caused by the rotor blades of UAVs in low flight may significantly
affect the plant canopy structure, leading to a large error when measuring phenological
parameters. Moreover, the resolution obtained by the sensors is usually very low when
the UAVs reach a height where the airflow does not cause disturbance to the plants. The
resolution of the images or point clouds obtained by UGV may be higher than that obtained
by the UAV since the sensors on the UGV phenotyping platform are closer to the top of the
plant canopy.
However, UGVs have some disadvantages. First, the quality of the ground soil limits
UGV movement. For example, wet soil can leave the UGV stuck in the mud, leading to compaction of the soil and damage to plants. The traditional UGV phenotyping platform moves continuously while collecting data, and the resulting bumps can degrade point cloud acquisition. Therefore, the LiDAR frames should be stitched to obtain the complete plant canopy 3D morphology as a high-density point cloud. The jitter of the vehicle may also hinder wheel odometry, laser odometry, and other SLAM map-building approaches. If the LiDAR frames are not stitched, a LiDAR with a high line count would be needed to obtain a dense point cloud [30]. However, an increased number of LiDAR lines increases the cost. Furthermore, the maximum canopy estimation accuracy may not be as high as the plant height estimation accuracy due to the difficulty of manually measuring the maximum canopy width. Unlike a traditional UGV, which causes contact interference with larger plants during movement, the new UGV-LiDAR phenotyping platform has a small electric slide rail for efficient data acquisition while the vehicle is stationary, leading to high accuracy. Meanwhile, the LiDAR sensors are closer to the plants, increasing accuracy.
The small slide rail also increases the running accuracy compared with the track of the
orbital overhead travel phenotyping platform.
However, the proposed UGV-LiDAR phenotyping system has some limitations and
unfinished parts: (1) the point cloud data acquired by the UGV-LiDAR phenotyping system
has a large amount of redundancy, leading to inefficiency in post-processing. An overly dense point cloud is unnecessary for acquiring phenotypic parameters such as plant height and maximum crown width, since it carries more noise, which affects the extraction of phenotypic data. (2) Subsequent studies should optimize the UGV-LiDAR phenotyping system using a faster and more accurate vehicle-mounted motorized slide rail and replacing the LiDAR with one of higher accuracy and a lower line count. Further studies should increase resolvable phenotypic
parameters, such as leaf number, width, inclination, area, etc. Besides, deep learning
should be used to improve the speed and accuracy of lettuce single plant identification.
(3) Future studies should design UGV with adjustable height and span for application to
more plants, more scenes, and the acquisition of phenotypic information of plants during
different growth and development periods. (4) Finally, various sensors, such as RGB
camera, thermal infrared camera, and multispectral camera should be used in the future to
monitor more phenotypic information of plants.
6. Conclusions
In this paper, a new UGV phenotype platform equipped with an electric slide rail and
phenotype data acquisition-analysis pipeline was proposed to avoid the effect of movement
bumps on the quality of point cloud acquisition. The platform was developed using a 16-line LiDAR, an electric slide rail, and a UGV with RTK-GPS for automatic movement to obtain fine point cloud data. The 3D structure of the lettuce canopy was obtained by superimposing frames at a uniform speed (the uniform superposition method). This method has a cost advantage compared with traditional UGV acquisition systems that rely on high-line-count LiDAR. The point clouds were matched and fused by the iterative closest point (ICP) algorithm through the pipeline to complete the 3D reconstruction of a whole strip point cloud. The random sample consensus (RANSAC), Euclidean clustering, and k-means clustering algorithms were used to obtain a single lettuce canopy 3D
point cloud. The plant height and maximum crown width were also accurately estimated.
The new UGV phenotyping platform can be used to measure plant height and maximum crown width with high accuracy and at a reduced cost compared with PlantEye
F500. Therefore, the platform can be used to measure other plant 3D phenotype data after
further expansion of the algorithm. The UGV platform can also be installed with other
sensors to achieve more dimensional phenotype information monitoring.
Author Contributions: Conceptualization, X.G. and J.F.; methodology, S.C., W.W. and J.F.; software,
S.C. and W.G.; validation, S.C.; resources, X.G.; data curation, S.C. and X.L.; writing—original draft
preparation, S.C., W.W., and J.F.; writing—review and editing, S.C. and X.G.; visualization, S.C.;
supervision, X.G.; funding acquisition, X.G. All authors have read and agreed to the published
version of the manuscript.
Funding: This research was funded by the National Key R&D Program (2022YFD2002305), Con-
struction of Beijing Nova Program (Z211100002121065), Collaborative Innovation Center of Beijing
Academy of Agricultural and Forestry Sciences (KJCX201917), and Science and Technology Innova-
tion Special Construction Funded Program of Beijing Academy of Agriculture and Forestry Sciences
(KJCX20210413).
Institutional Review Board Statement: Not applicable.
Informed Consent Statement: Not applicable.
Data Availability Statement: Not applicable.
Acknowledgments: The authors would like to thank the editor and the anonymous reviewers for
their valuable suggestions to improve the quality of this paper.
Conflicts of Interest: The authors declare no conflict of interest.
References
1. Dhondt, S.; Wuyts, N.; Inze, D. Cell to whole-plant phenotyping: The best is yet to come. Trends Plant Sci. 2013, 18, 433–444.
[CrossRef]
2. Watt, M.; Fiorani, F.; Usadel, B.; Rascher, U.; Muller, O.; Schurr, U. Phenotyping: New Windows into the Plant for Breeders. Annu.
Rev. Plant Biol. 2020, 71, 689–712. [CrossRef]
3. Grosskinsky, D.K.; Svensgaard, J.; Christensen, S.; Roitsch, T. Plant phenomics and the need for physiological phenotyping across
scales to narrow the genotype-to-phenotype knowledge gap. J. Exp. Bot. 2015, 66, 5429–5440. [CrossRef]
4. Kim, S.L.; Solehati, N.; Choi, I.C.; Kim, K.H.; Kwon, T.R. Data Management for Plant Phenomics. J. Plant Biol. 2017, 60, 285–297.
[CrossRef]
5. Wu, S.; Wen, W.L.; Wang, Y.J.; Fan, J.C.; Wang, C.Y.; Gou, W.B.; Guo, X.Y. MVS-Pheno: A Portable and Low-Cost Phenotyping
Platform for Maize Shoots Using Multiview Stereo 3D Reconstruction. Plant Phenomics 2020, 2020, 1848437. [CrossRef]
6. Fiorani, F.; Schurr, U. Future Scenarios for Plant Phenotyping. Annu. Rev. Plant Biol. 2013, 64, 267–291. [CrossRef]
7. Tester, M.; Langridge, P. Breeding Technologies to Increase Crop Production in a Changing World. Science 2010, 327, 818–822.
[CrossRef]
8. Virlet, N.; Sabermanesh, K.; Sadeghi-Tehran, P.; Hawkesford, M.J. Field Scanalyzer: An automated robotic field phenotyping
platform for detailed crop monitoring. Funct. Plant Biol. 2017, 44, 143–153. [CrossRef]
9. Shafiekhani, A.; Kadam, S.; Fritschi, F.B.; DeSouza, G.N. Vinobot and Vinoculer: Two Robotic Platforms for High-Throughput
Field Phenotyping. Sensors 2017, 17, 214. [CrossRef]
10. Sun, S.P.; Li, C.Y.; Paterson, A.H.; Jiang, Y.; Xu, R.; Robertson, J.S.; Snider, J.L.; Chee, P.W. In-field High Throughput Phenotyping
and Cotton Plant Growth Analysis Using LiDAR. Front. Plant Sci. 2018, 9, 16. [CrossRef]
11. Kirchgessner, N.; Liebisch, F.; Yu, K.; Pfeifer, J.; Friedli, M.; Hund, A.; Walter, A. The ETH field phenotyping platform FIP: A
cable-suspended multi sensor system. Funct. Plant Biol. 2017, 44, 154–168. [CrossRef]
12. Yang, G.J.; Liu, J.G.; Zhao, C.J.; Li, Z.H.; Huang, Y.B.; Yu, H.Y.; Xu, B.; Yang, X.D.; Zhu, D.M.; Zhang, X.Y.; et al. Unmanned
Aerial Vehicle Remote Sensing for Field-Based Crop Phenotyping: Current Status and Perspectives. Front. Plant Sci. 2017, 8, 1111.
[CrossRef]
13. Roitsch, T.; Cabrera-Bosquet, L.; Fournier, A.; Ghamkhar, K.; Jimenez-Berni, J.; Pinto, F.; Ober, E.S. Review: New sensors and
data-driven approaches—A path to next generation phenomics. Plant Sci. 2019, 282, 2–10. [CrossRef]
14. Zhu, J.Q.; van der Werf, W.; Anten, N.P.R.; Vos, J.; Evers, J.B. The contribution of phenotypic plasticity to complementary light
capture in plant mixtures. New Phytol. 2015, 207, 1213–1222. [CrossRef] [PubMed]
15. Sadras, V.O.; Slafer, G.A. Environmental modulation of yield components in cereals: Heritabilities reveal a hierarchy of phenotypic
plasticities. Field Crops Res. 2012, 127, 215–224. [CrossRef]
16. Lati, R.N.; Filin, S.; Eizenberg, H. Estimation of Plants’ Growth Parameters via Image-Based Reconstruction of Their Three-
Dimensional Shape. Agron. J. 2013, 105, 191–198. [CrossRef]
17. Zhao, C.J.; Zhang, Y.; Du, J.J.; Guo, X.Y.; Wen, W.L.; Gu, S.H.; Wang, J.L.; Fan, J.C. Crop Phenomics: Current Status and
Perspectives. Front. Plant Sci. 2019, 10, 714. [CrossRef]
18. McCarthy, C.L.; Hancock, N.H.; Raine, S.R. Applied machine vision of plants: A review with implications for field deployment in
automated farming operations. Intell. Serv. Robot. 2010, 3, 209–217. [CrossRef]
19. Jin, S.C.; Su, Y.J.; Wu, F.F.; Pang, S.X.; Gao, S.; Hu, T.Y.; Liu, J.; Guo, Q.H. Stem-Leaf Segmentation and Phenotypic Trait Extraction
of Individual Maize Using Terrestrial LiDAR Data. IEEE Trans. Geosci. Remote Sens. 2019, 57, 1336–1346. [CrossRef]
20. Song, Y.; Wilson, R.; Edmondson, R.; Parsons, N. Surface modelling of plants from stereo images. In Proceedings of the Sixth
International Conference on 3-D Digital Imaging and Modeling (3DIM 2007), Montreal, QC, Canada, 21–23 August 2007; pp. 312–319.
21. Hui, F.; Zhu, J.Y.; Hu, P.C.; Meng, L.; Zhu, B.L.; Guo, Y.; Li, B.G.; Ma, Y.T. Image-based dynamic quantification and high-accuracy
3D evaluation of canopy structure of plant populations. Ann. Bot. 2018, 121, 1079–1088. [CrossRef]
22. Zheng, G.; Moskal, L.M. Computational-Geometry-Based Retrieval of Effective Leaf Area Index Using Terrestrial Laser Scanning.
IEEE Trans. Geosci. Remote Sens. 2012, 50, 3958–3969. [CrossRef]
23. Sun, C.X.; Huang, C.W.; Zhang, H.Q.; Chen, B.Q.; An, F.; Wang, L.W.; Yun, T. Individual Tree Crown Segmentation and Crown
Width Extraction From a Heightmap Derived From Aerial Laser Scanning Data Using a Deep Learning Framework. Front. Plant
Sci. 2022, 13, 914974. [CrossRef] [PubMed]
24. Zhang, B.; Wang, X.J.; Yuan, X.Y.; An, F.; Zhang, H.Q.; Zhou, L.J.; Shi, J.G.; Yun, T. Simulating Wind Disturbances over Rubber
Trees with Phenotypic Trait Analysis Using Terrestrial Laser Scanning. Forests 2022, 13, 1298. [CrossRef]
25. Eitel, J.U.H.; Vierling, L.A.; Long, D.S.; Hunt, E.R. Early season remote sensing of wheat nitrogen status using a green scanning
laser. Agric. For. Meteorol. 2011, 151, 1338–1345. [CrossRef]
26. Nguyen, P.; Badenhorst, P.E.; Shi, F.; Spangenberg, G.C.; Smith, K.F.; Daetwyler, H.D. Design of an Unmanned Ground Vehicle and
LiDAR Pipeline for the High-Throughput Phenotyping of Biomass in Perennial Ryegrass. Remote Sens. 2021, 13, 20. [CrossRef]
27. Sanz, R.; Rosell, J.R.; Llorens, J.; Gil, E.; Planas, S. Relationship between tree row LIDAR-volume and leaf area density for fruit
orchards and vineyards obtained with a LIDAR 3D Dynamic Measurement System. Agric. For. Meteorol. 2013, 171, 153–162.
[CrossRef]
28. Besl, P.J.; McKay, N.D. A method for registration of 3-D shapes. IEEE Trans. Pattern Anal. Mach. Intell. 1992, 14, 239–256.
[CrossRef]
29. Deery, D.; Jimenez-Berni, J.; Jones, H.; Sirault, X.; Furbank, R. Proximal Remote Sensing Buggies and Potential Applications for
Field-Based Phenotyping. Agronomy 2014, 4, 349–379. [CrossRef]
30. Qiu, Q.; Sun, N.; Bai, H.; Wang, N.; Fan, Z.Q.; Wang, Y.J.; Meng, Z.J.; Li, B.; Cong, Y. Field-Based High-Throughput Phenotyping
for Maize Plant Using 3D LiDAR Point Cloud Generated with a “Phenomobile”. Front. Plant Sci. 2019, 10, 554. [CrossRef]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual
author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to
people or property resulting from any ideas, methods, instructions or products referred to in the content.