
plants

Article
Design and Development of a Low-Cost UGV 3D Phenotyping
Platform with Integrated LiDAR and Electric Slide Rail
Shuangze Cai 1,2, Wenbo Gou 2, Weiliang Wen 2,3, Xianju Lu 2, Jiangchuan Fan 2,4,* and Xinyu Guo 1,2,3,*

1 School of Agricultural Equipment Engineering, Jiangsu University, Zhenjiang 212013, China


2 Beijing Key Lab of Digital Plant, National Engineering Research Center for Information Technology in
Agriculture, Beijing 100097, China
3 Information Technology Research Center, Beijing Academy of Agriculture and Forestry Sciences,
Beijing 100097, China
4 Beijing PAIDE Science and Technology Development Co., Ltd., Beijing 100097, China
* Correspondence: [email protected] (J.F.); [email protected] (X.G.)

Abstract: Unmanned ground vehicles (UGV) have attracted much attention in crop phenotype monitoring due to their lightweight and flexibility. This paper describes a new UGV equipped with an electric slide rail and a high-throughput point cloud acquisition and phenotype extraction system. The designed UGV is equipped with an autopilot system, a small electric slide rail, and Light Detection and Ranging (LiDAR) to achieve high-throughput, high-precision automatic crop point cloud acquisition and map building. The phenotype analysis system realized single plant segmentation and pipeline extraction of plant height and maximum crown width from the crop point cloud using the random sample consensus (RANSAC), Euclidean clustering, and k-means clustering algorithms. This phenotyping system was used to collect point cloud data and extract plant height and maximum crown width for 54 greenhouse-potted lettuce plants. The results showed that the coefficients of determination (R2) between the collected data and manual measurements were 0.97996 and 0.90975, respectively, while the root mean square errors (RMSE) were 1.51 cm and 4.99 cm, respectively. At less than a tenth of the cost of the PlantEye F500, the UGV achieves phenotypic data acquisition with less error and detects morphological trait differences between lettuce types. Thus, it could be suitable for actual 3D phenotypic measurements of greenhouse crops.

Keywords: 3D phenotyping platform; electric slide rail; LiDAR; low-cost UGV; point cloud processing

Citation: Cai, S.; Gou, W.; Wen, W.; Lu, X.; Fan, J.; Guo, X. Design and Development of a Low-Cost UGV 3D Phenotyping Platform with Integrated LiDAR and Electric Slide Rail. Plants 2023, 12, 483. https://doi.org/10.3390/plants12030483

1. Introduction

Plant phenotypes are recognizable physical, physiological, and biochemical characteristics and traits that arise in part or whole due to the interaction of genes with the environment [1,2]. Plant phenomics has gradually become a key research area in basic and applied plant science in the past two decades. Nevertheless, inadequate phenotype detection technology is one of the major bottlenecks in crop breeding [3]. Traditional phenotype detection methods are labor-intensive, time-consuming, and subjective, with no uniform standards [4]. Therefore, machines that can replace manual plant phenotype detection are needed. Many crop phenotype information acquisition platforms have recently been developed in China and abroad. Based on the growth scenarios of crops, the platforms can be divided into field phenotyping and indoor phenotyping platforms [5]. Indoor phenotyping platforms include Crop 3D, developed by the Institute of Botany, Chinese Academy of Sciences, and Scanalyzer 3D, developed by LemnaTec, Germany, while outdoor phenotyping platforms include the Field Scanalyzer from Rothamsted Research Center, UK, and LQ-Fieldpheno from the Agricultural Information Technology Research Center, Beijing, China. The efficiency and quality of phenotype information acquisition have been greatly improved due to the rapid development of sensor technology and equipment for plant phenotypes [6]. As a result, breeding programs focused on feeding billions of people worldwide have significantly improved [7].
The major phenotype acquisition devices include orbital [8], robotic [9], vehicle-mounted [10], suspension [11], and unmanned aerial vehicle-mounted [12] systems. However, these phenotyping devices, including vehicle-mounted and robotic ones, can easily interfere with or even crush crops under complex outdoor conditions. As a result, their use is often characterized by large data volumes, low ground resolution, much additional information (geographic location, light, temperature, water, air, and other environmental factors), non-uniform acquisition standards, high data uncertainty, low repeatability, and strong time sensitivity [13]. An indoor phenotyping platform, combined with environmental control equipment (controlled greenhouses and artificial climate chambers), can simulate diverse crop growth conditions for assessing phenotypic plasticity and stability, identifying key phenotypic traits (yield, quality, and various resistance indicators) in all aspects, and obtaining statistically significant research conclusions. Such platforms achieve precise regulation, graded simulation, and automated precision collection, which cannot be easily achieved in outdoor environments [14]. Therefore, indoor phenotypic monitoring technologies are suitable for accurate and graded simulation and for targeted research on crop growth and development under complex experimental conditions [15].
Nevertheless, machine vision methods can be more accurate and effective in mea-
suring key growth parameters (plant height and maximum crown width) of common
crops, especially leafy vegetables [16]. However, the special structure of plants and the complex environment limit the precision of plant phenotypic parameters extracted from 2D images [17]. Therefore, the 3D structure is crucial for assessing plant growth status. Fur-
thermore, studies have shown that equipment and technology for acquiring 3D phenotypes
of crop canopies are crucial for quality breeding, scientific cultivation, and fine manage-
ment [18,19]. Various devices and methods have been used to assess crop 3D information
based on the principles of binocular stereo vision, structured light, Time of Flight (ToF), and
multi-view stereo reconstruction (MVS). For example, Song et al. [20] obtained 3D point
clouds of horticultural crops and achieved surface reconstruction based on binocular stereo
vision. Hui et al. [21] reconstructed 3D plant point cloud models of cucumber, pepper, and
eggplant and calculated phenotypic parameters, such as leaf length, leaf width, leaf area,
plant height, and maximum canopy width based on multi-view stereo vision method and
laser scanning method. LiDAR is widely used to acquire crop phenotypic information due
to its high accuracy and fast scanning speed. Zheng et al. [22] also obtained 3D canopy
point cloud data of trees and estimated their leaf area index using ground-based laser
scanning. Sun et al. [23] extracted individual tree crowns and canopy width from aerial
laser scanning data. Zhang et al. [24] analyzed the dynamic phenotypic changes of trees
under wind disturbance using LiDAR.
In summary, UGVs equipped with LiDAR systems have received much attention due
to their flexibility in monitoring crop phenotypes and high accuracy in the 3D reconstruction
of crops. For example, UGVs equipped with LiDAR systems have been used to measure
crop nitrogen status [25], estimate above-ground biomass [26], and measure planting
density [27]. The commonly used orbital overhead travel phenotyping platform is limited
by its high cost and immobility. Moreover, it lacks accuracy during sensor acquisition due
to the vibration caused by the uneven ground when the UGV is moving. In this study, a
new UGV phenotype platform equipped with an electric slide rail and phenotype data
acquisition-analysis pipeline was developed using electric slide rail and LiDAR for accurate
data acquisition. Furthermore, the 3D reconstruction and growth parameter measurement
of lettuce grown in greenhouse pots were conducted. This study aimed to establish a
low-cost automated crop growth non-destructive measurement system with good accuracy
and practicality.

2. Methods

2.1. System Overview

The UGV-LiDAR phenotyping system consists of three main parts: the hardware part, the data acquisition part, and the data processing part (Figure 1): (1) the phenotype data acquisition hardware contains a four-wheel drive self-propelled UGV, an electric slide rail, LiDAR, a real-time kinematic global positioning system (RTK-GPS), and an industrial computer; (2) the data acquisition control module includes the UGV navigation path setting, the electric slide rail's moving speed and moving range setting, LiDAR start and stop, data saving path and file name setting, etc.; (3) the data processing module includes the processing of raw LiDAR data, point cloud registration, single plant extraction, and phenotype extraction.

Figure 1. Composition of UGV-LiDAR phenotyping system.
2.2. UGV Platform Hardware Architecture and Control System

The platform is divided into five parts. The hardware structure diagram (Figure 2a,b) and physical diagram (Figure 2c,d) are shown in Figure 2. The parts (A, B, C, D, and E) in Figure 2b are described below:
(1) Part A represents the UGV body (size: 2195 mm × 1900 mm × 2065 mm). It is the main structure of the platform, and its preliminary design was produced in SolidWorks 2019 SP5.0 (Dassault Systèmes, France). The lowest point of the chassis is about 1400 mm from the ground. The body is mainly made of stainless steel and aluminum alloy to reduce weight; the whole machine weighs about 200 kg. The tires are solid rubber (outer diameter: 660 mm; width: 35 mm), and the narrow tire width enables the machine to move flexibly between plants. The UGV is a four-wheel drive machine with brushless direct current (BLDC) drive motors rated at 13.5 kW and 3600 r/min. The steering motors are brushed direct current (BDC) motors rated at 0.95 kW and 1000 r/min.
(2) Part B represents the electric slide rail installed at the bottom of the chassis. It is connected to the industrial personal computer (IPC) through an RS485-to-USB adapter, which allows the movement direction, speed, and start/stop position of the slide rail to be set through the control software on the computer.
(3) Part C represents the LiDAR, a VLP-16 (Velodyne, Silicon Valley, CA, USA) installed on the electric slide rail. It provides 16 scan lines over a 360° horizontal field of view with a horizontal angular resolution of 0.1° to 0.4°, and a 30° vertical field of view with an angular resolution of 2°. The LiDAR is installed on the slide rail at a height of 1 m from the ground. The IPC is connected to the LiDAR through Ethernet and can control the LiDAR and collect the data acquired from the sensor.

Figure 2. Hardware structure and physical structure of the UGV. (a) Three-dimensional view; (b) decomposition diagram; (c) real view of the UGV collecting data; (d) LiDAR installation diagram.

(4) Part D represents the control box and battery module of the moving part of the vehicle. A symmetrical layout with four-wheel drive is used to control the movement of the platform. An independent brushless motor drives each wheel, and four independent motors steer the four wheels. The UGV adopts two steering schemes: four-wheel steering with front and rear wheels in the same direction, and front-wheel steering (Figure 3). The steering scheme can be switched using a remote control, depending on the scenario. The turning radius is relatively small when using front and rear-wheel steering, which allows directional translation in a narrow space. The turning radius is larger when using front-wheel steering, but the orientation of the vehicle can be adjusted. The machine is controlled by an Arduino control board, four absolute encoders, and an IPC, which form a closed-loop control system. The IPC controls the steering of the four wheels through the encoders and the Arduino control board, and it controls UGV movement by controlling the drive motors of the four wheels. RS485 communication is used among the encoders, drivers, Arduino control board, and IPC (a sketch of one such closed-loop update follows the parts list below). The whole vehicle is powered by two lead-acid batteries and a power regulator.

Figure 3. Schematic diagram of two steering schemes. (a) Four-wheel co-steering; (b) front-wheel steering.

(5) Part E represents the IPC, wireless router, and RTK-GPS module. The UGV platform obtains positioning information using a real-time kinematic differential positioning system (RTK-GPS). A laptop connected to the IPC via RDP remote desktop can control the electric slide rail and LiDAR, and the laptop can be connected to the RTK-GPS via USB to plan the path of the UGV. The control architecture diagram of this UGV platform is shown in Figure 4.
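The closed-loop steering in Part D can be illustrated with a short, heavily simplified sketch. Everything protocol-specific here — the port name, baud rate, and ASCII command format — is a hypothetical stand-in, since the paper does not publish the platform's actual RS485 protocol; only the IPC → Arduino → motor and encoder → IPC loop structure comes from the text.

```python
# Hypothetical sketch of one closed-loop steering update over RS485.
# The command strings and register layout are invented for illustration.
import serial  # pyserial

def steer_correction(bus: serial.Serial, wheel: int, target_deg: float,
                     kp: float = 0.8) -> float:
    """One proportional correction step for a single steering wheel."""
    bus.write(f"READ {wheel}\n".encode())              # ask encoder for angle
    actual_deg = float(bus.readline().decode().strip())
    error = target_deg - actual_deg
    bus.write(f"STEER {wheel} {kp * error:.2f}\n".encode())  # send correction
    return error

# bus = serial.Serial("/dev/ttyUSB0", 9600, timeout=1.0)  # assumed adapter
```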
Figure 4. UGV control system.

2.3. Data Acquisition

The designed UGV can be operated manually with a FrSky Taranis X9D Plus 2019 remote control, or it can navigate a specific path in the field by creating tasks from measured GPS points, where centimeter-level positioning is obtained from RTK-GNSS receivers. Interactive control software that runs on the IPC was developed for the cooperative control of the LiDAR and the electric slide rail. The UGV traverses a strip of potted crops and collects LiDAR data in a fixed-point acquisition manner, i.e., when the UGV stops at a location, the electric slide rail starts to work, carrying the LiDAR from one end of the slide rail to the other at a uniform speed. The speed of the slide rail was set to 344 cm/min based on several experiments: a faster speed may cause the slide rail to jitter and degrade the quality of the point cloud, while a slower speed increases the amount of data, reducing efficiency and hindering the transmission, storage, and subsequent processing of the data. The LiDAR starts recording the point cloud in .pcap file format when the slide rail starts moving and stops recording when it reaches the end of the slide, after which the UGV moves to the next position for the next round of LiDAR data acquisition. The data acquisition method is shown in Figure 5.

Figure 5. Schematic of UGV-LiDAR phenotyping platform for data acquisition.
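The stop-scan-move cycle above can be summarized in a high-level sketch. The three controller objects and their method names are hypothetical stand-ins for the interactive control software on the IPC; only the numeric settings (344 cm/min rail speed, .pcap output) come from the text.

```python
# Hypothetical orchestration of the fixed-point acquisition cycle (Section 2.3).
SLIDE_SPEED_CM_MIN = 344  # rail speed chosen experimentally in the paper

def acquire_strip(ugv, rail, lidar, waypoints):
    for i, point in enumerate(waypoints):
        ugv.drive_to(point)                           # stop at the next block
        lidar.start_recording(f"block_{i:03d}.pcap")  # begin .pcap capture
        rail.sweep(speed_cm_min=SLIDE_SPEED_CM_MIN)   # carry LiDAR end to end
        lidar.stop_recording()                        # then move on
```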

2.4. Data Preprocessing

The data acquired by the LiDAR is a pcap data package: dynamic data composed of 16 lines of laser points at 15 frames per second. The original LiDAR data must be processed to get the dense 3D point cloud of the plant. The traditional point cloud stitching method estimates the pose of each frame acquired by the LiDAR through a wheeled odometer or a laser odometer. A wheel odometer may introduce splicing errors because the ground is uneven and the UGV may bump. The robustness of SLAM map building with a laser odometer is also very poor here, since plant leaves are flexible and easily disturbed by the environment. In the experimental design, the speed of the slide rail was a mm/s, and the LiDAR acquired data at 15 frames per second. Assuming that each point P in the kth frame is being processed, P's coordinates can be translated along the vector d to obtain the following equation:

P′ = P + d    (1)

where

P = (x, y, z, 1)^T, P′ = (x′, y′, z′, 1)^T, d = (αx, αy, αz, 1)^T    (2)

The transformation process from P to P′ can be expressed as P′ = TP, using the following transformation matrix T:

T = T(αx, αy, αz) =
| 1 0 0 αx |
| 0 1 0 αy |
| 0 0 1 αz |
| 0 0 0 1  |    (3)

Since the LiDAR moves only in the z-axis direction of the sensor, the translation of the kth frame reduces to

αx = 0, αy = 0, αz = ka/15

where k is the frame index and a is the slide rail speed in mm/s, so ka/15 is the displacement accumulated by the kth frame at 15 frames per second. Each point P of each frame can thus be mapped to its actual spatial coordinates, and Velodyne data processing software was developed accordingly. The method of uniform superposition was used to stitch each frame into a dense point cloud for each UGV fixed-point block. The Velodyne data processing flow is shown in Figure 6.
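A minimal sketch of this uniform superposition is given below, assuming the .pcap file has already been unpacked into one N×3 NumPy array per frame (for example with a decoder package — an assumption, since the authors' own Velodyne software is not public). The slide-rail speed from Section 2.3 (344 cm/min ≈ 57.33 mm/s) stands in for the symbolic a.

```python
import numpy as np
import open3d as o3d

FPS = 15          # VLP-16 frames per second (Section 2.4)
A_MM_S = 57.33    # slide-rail speed a in mm/s (344 cm/min, Section 2.3)

def stitch_block(frames):
    """Superimpose per-frame (N, 3) arrays (in mm) into one dense block cloud.

    Implements Equations (1)-(3): every point of frame k is translated by
    d = (0, 0, k*a/15) before uniform superposition.
    """
    shifted = [pts + np.array([0.0, 0.0, k * A_MM_S / FPS])
               for k, pts in enumerate(frames)]
    pcd = o3d.geometry.PointCloud()
    pcd.points = o3d.utility.Vector3dVector(np.vstack(shifted))
    return pcd
```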

Figure 6. Velodyne data processing flow. (a) Velodyne raw data. (b) Dense point cloud of UGV fixed-point block.
2.5. Point Cloud Processing and Phenotype Estimation

An automated processing pipeline was developed in the Python language with the Visual Studio 2019 IDE, using library functions from Open3D, and was used to post-process the block point clouds and perform crop phenotype estimation. Firstly, all block point clouds acquired by the UGV in the crop strip were spliced, then normalized to remove the noise, and the ground was fitted. Each crop was then segmented using a clustering algorithm, and the phenotypic parameters, including plant height and maximum crown width, were extracted for each crop. The point cloud processing pipeline is shown in Figure 7.

Figure 7. Point cloud processing pipeline.

2.5.1. Point Cloud Registration

The block point clouds need to be stitched after preprocessing, in the order acquired by the UGV, to form a whole strip of the point cloud. In this paper, the block point cloud was aligned using the Iterative Closest Point (ICP) algorithm [28]. However, ICP places a high requirement on the initial positions of the aligned point cloud and the reference point cloud; the algorithm is prone to a local optimum if the initial positions of the two point clouds are very different. Therefore, the two point clouds should be coarsely aligned
first before using this algorithm for registration. Each block of the data acquired by the
UGV has an equal spacing of 190 cm with an overlap of 30% between each adjacent two
blocks. Therefore, coarse registration can be completed by moving the kth block 190 (k − 1)
cm in the z-axis direction. Moreover, the ICP algorithm is based on the least squares
method, which finds the nearest neighbors based on certain constraints to calculate the
best registration parameters, i.e., the rotation matrix R and the translation vector t that minimize the error function. The error function E(R, t) can be expressed
as follows:
E(R, t) = (1/n) ∑_{i=1}^{n} ‖qᵢ − (Rpᵢ + t)‖²    (4)
where n, pᵢ, qᵢ, R, and t represent the number of nearest neighbor pairs, a point in the target point cloud P, the nearest point in the source point cloud Q corresponding to pᵢ, the rotation matrix, and the translation vector, respectively. The ICP algorithm is implemented as follows:
(1) The set of points pᵢ ∈ P in the target point cloud P is selected;
(2) The set of points qᵢ corresponding to pᵢ in the source point cloud Q is identified (satisfying qᵢ ∈ Q such that ‖qᵢ − pᵢ‖ is the minimum value);
(3) The rotation matrix R and translation vector t are obtained by calculating the relationship between the corresponding point sets such that the error function is minimized;
(4) pᵢ is transformed with R and t, and a new set of corresponding points is obtained;
(5) The average distance d between the new pᵢ and its corresponding point set qᵢ is calculated as follows:

d = (1/n) ∑_{i=1}^{n} ‖pᵢ − qᵢ‖²    (5)

(6) The calculation is stopped if d is less than the given threshold or if the set number of iterations is exceeded; otherwise, step (2) and the subsequent steps are re-executed until the convergence conditions are met. Herein, the blocks were registered two point clouds at a time (Figure 7b) and successively added to the global point cloud to obtain a complete point cloud of the crop strip. The complete point cloud of the strip is shown in Figure 7c.
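Below is a minimal sketch of this coarse-then-fine registration using Open3D's point-to-point ICP. The 190 cm block spacing follows the text; the 2 cm correspondence threshold, the sign of the coarse shift, and the `blocks` list (block clouds in acquisition order) are illustrative assumptions, not the authors' exact settings.

```python
import copy
import numpy as np
import open3d as o3d

def register_strip(blocks, spacing_m=1.9, max_corr_dist=0.02):
    """Coarse z-shift of 190*(k-1) cm, then ICP, for each block in order."""
    strip = copy.deepcopy(blocks[0])
    for k, block in enumerate(blocks[1:], start=1):
        init = np.eye(4)
        init[2, 3] = k * spacing_m          # coarse registration along z
        result = o3d.pipelines.registration.registration_icp(
            block, strip, max_corr_dist, init,
            o3d.pipelines.registration.TransformationEstimationPointToPoint())
        aligned = copy.deepcopy(block)
        aligned.transform(result.transformation)
        strip += aligned                    # grow the global strip cloud
    return strip
```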

2.5.2. Noise Removal and Ground Detection


Laser scanning produces point cloud datasets with non-uniform density. In addition,
errors in the measurements can produce sparse outliers, leading to poor results. In this
study, the outliers or coarse points caused by measurement errors were removed using statistical outlier removal as follows: a statistical analysis was performed on the neighborhood of each point to calculate its average distance to all neighboring points. Assuming the resulting distances follow a Gaussian distribution whose shape is determined by the mean and standard deviation, points whose mean distance falls outside the standard range (defined by the global distance mean and variance) are defined as outliers and removed from the data.
The ground must be detected after noise removal and the ground points removed. In this
study, the Random sample consensus (RANSAC) algorithm was used to distinguish the
detected ground from the crop point clouds to be clustered and prevent the ground point
clouds from being mistakenly detected as crops in the subsequent clustering process. In the
algorithm design, the number of iterations of the algorithm, the distance error threshold, and the total number of points of the point cloud were denoted l, ∆T1, and N, respectively. First,
three points were randomly selected to form the ground to be fitted. The 3D coordinates of
the three points were set to (X1, Y1, Z1), (X2, Y2, Z2), (X3, Y3, Z3). The fitted plane model is
shown below:
Ax + By + Cz + D = 0    (6)
where,
A = (Y2 − Y1) (Z3 − Z1) − (Z2 − Z1) (Y3 − Y1)
B = (Z2 − Z1) (X3 − X1) − (X2 − X1) (Z3 − Z1)
C = (X2 − X1) (Y3 − Y1) − (Y2 − Y1) (X3 − X1)
D = −(AX1 + BY1 + CZ1)
The distance L from any point in space (X0, Y0, and Z0) to this plane was calculated
as follows:
L = |AX0 + BY0 + CZ0 + D| / √(A² + B² + C²)    (7)
A point was counted as an interior point of the model when its distance L to this hypothetical plane was ≤ ∆T1. The number of interior points of the model was recorded by iterating through the remaining N − 3 points (excluding the three initially sampled points). New plane models were scored in the same way by randomly sampling three points each time, repeating this random sampling l times. The probability
of producing a reasonable result increases with an increasing number of iterations. Finally,
the ground model with the highest number of interior points was selected as the best fit
based on the number of interior points of each model. The ground detection results are
shown in Figure 7d. The red points in the figure represent ground points.
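Open3D ships both steps as built-ins, so the denoising and ground detection above can be sketched as follows; the neighbor count, standard-deviation ratio, and RANSAC thresholds are illustrative values, not the paper's tuned parameters.

```python
import open3d as o3d

def remove_noise_and_ground(pcd):
    # Statistical outlier removal: drop points whose mean neighbor distance
    # deviates too far from the global mean (Gaussian assumption).
    pcd, _ = pcd.remove_statistical_outlier(nb_neighbors=20, std_ratio=2.0)

    # RANSAC plane fit (Equations (6)-(7)): plane_model = [A, B, C, D].
    plane_model, inliers = pcd.segment_plane(distance_threshold=0.01,
                                             ransac_n=3,
                                             num_iterations=1000)
    ground = pcd.select_by_index(inliers)             # red points in Figure 7d
    crops = pcd.select_by_index(inliers, invert=True)
    return crops, ground, plane_model
```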

2.5.3. Single Plant Division


In this study, partitioning of the crop point cloud into individual plants was performed using
the Euclidean clustering algorithm to group points close to each other into one category.
Assuming that there are n points in point cloud C, the Euclidean distance is defined as the
closeness of two points. The distance between the nearest neighboring points is used to
achieve the point cloud clustering segmentation. The specific segmentation process is as
follows: for the preprocessed point cloud data set P, determine a query point Pi, and set the
distance threshold r; find the n nearest neighbor points Pj (j = 1,2, · · · , n) through KD-Tree;
calculate the Euclidean distance dj from the n nearest neighbor points to the query point
using Equation (8); compare the distance dj with the distance threshold r; put the points
less than r into the class M. The segmentation is completed when the number of points in
M is not increasing.
d(pᵢ, pⱼ) = √( ∑_{k=1}^{n} (pᵢₖ − pⱼₖ)² )    (8)

This algorithm can only partition the single crop where there is no overlap of leaves
between two pots. Euclidean clustering cannot partition a single crop where some crops
have large leaf growth and are close to each other. In such cases, the K-means clustering
algorithm was used for partitioning. The K-means algorithm is a partition-based clustering algorithm, where the mean value of all objects in the cluster represents the center
of each cluster. Its input is the number of clusters (K) with the data set containing n objects
(D), while the output is the set of K clusters.
The algorithm flow is shown below:
(1) Choose any K objects from D as the initial clustering centers.
(2) Assign each object to the most similar cluster based on the mean value of the objects
in the cluster.
(3) Update the cluster mean, i.e., the mean of the objects in each cluster is recalculated.
(4) Repeat steps (2) and (3) until the clusters do not change.
The K-means clustering method requires the number of clusters and the initial cluster centers to be determined in advance. In this study, four crops could not be partitioned by Euclidean clustering (Figure 7e), and thus the K-means clustering algorithm was used to partition them
(K = 4). The partitioning results are shown in Figure 7f.
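A minimal sketch of this two-stage segmentation is given below. Open3D does not expose a bare Euclidean clustering call; its DBSCAN with min_points=1 degenerates to the same distance-threshold grouping, and scikit-learn's KMeans stands in for the K-means step. The 5 cm threshold and the use of xy coordinates only for K-means are illustrative assumptions.

```python
import numpy as np
import open3d as o3d
from sklearn.cluster import KMeans

def euclidean_clusters(crops, r=0.05):
    """Distance-threshold clustering (DBSCAN with min_points=1 ~ Euclidean)."""
    labels = np.asarray(crops.cluster_dbscan(eps=r, min_points=1))
    return [crops.select_by_index(np.where(labels == i)[0])
            for i in range(labels.max() + 1)]

def split_touching(cluster, k=4):
    """Split one merged cluster of k touching plants with K-means (K = 4)."""
    pts = np.asarray(cluster.points)
    labels = KMeans(n_clusters=k, n_init=10).fit_predict(pts[:, :2])  # xy only
    return [cluster.select_by_index(np.where(labels == i)[0]) for i in range(k)]
```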
2.5.4. Plant Height Extraction


The direction of the point cloud obtained by LiDAR and the direction of the xyz
coordinate axis of the real-world coordinate system are inconsistent. Therefore, the whole
point cloud should be calibrated to the horizontal plane before extracting the height of
the plant. Furthermore, the approximate ground is segmented by the RANSAC algorithm earlier in the point cloud processing pipeline, so the ground plane equation can be estimated from the segmented ground points. The rotation matrix R can then be found
using the normal vector a(a,b,c) of the ground before horizontal calibration and the vector
b(0,0,1) of the LiDAR point cloud coordinate system vertically upward. The point cloud
after horizontal calibration can be obtained by multiplying the original point cloud by the
rotation matrix R. The plant height (h) can be calculated by finding the difference between
the z value (Zmax ) of the highest point of the crop and the z value (Zmin ) of the ground
plane, then subtracting the known height (hp ) of the flower pot (Figure 7g). The plant
height calculation formula is shown below:

h = Zmax − Zmin − hp (9)
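A minimal sketch of this horizontal calibration and Equation (9) follows, reusing the [A, B, C, D] coefficients from the ground-fitting step and Rodrigues' formula to align the ground normal a with b = (0, 0, 1). The 12 cm pot height is a placeholder value.

```python
import numpy as np

def plant_height(points, plane_model, h_pot=0.12):
    """points: (N,3) plant cloud incl. nearby ground; plane_model: [A,B,C,D]."""
    a = np.array(plane_model[:3], dtype=float)
    a /= np.linalg.norm(a)
    b = np.array([0.0, 0.0, 1.0])

    # Rodrigues' rotation taking a onto b (assumes a is not opposite b).
    v = np.cross(a, b)
    c = float(np.dot(a, b))
    vx = np.array([[0, -v[2], v[1]],
                   [v[2], 0, -v[0]],
                   [-v[1], v[0], 0]])
    R = np.eye(3) + vx + vx @ vx / (1.0 + c)

    z = (points @ R.T)[:, 2]        # horizontally calibrated z values
    # z.min() stands in for the ground plane height Zmin, assuming ground
    # points are included in the input.
    return z.max() - z.min() - h_pot   # h = Zmax - Zmin - h_p (Equation (9))
```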

2.5.5. Maximum Crown Width Extraction


The extraction of the maximum crown width of a single crop is essentially a search for
the farthest point pair in the leaf plane point cloud. The search for the farthest point pair
can be performed using geometric properties. First, the vertical projection of the single-crop point cloud is calculated, i.e., the z-coordinate of all points is set to 0, which changes the single-crop point cloud from a 3D point cloud to a 2D point cloud in the xy-plane. Subsequently, the convex polygon contour of this 2D point cloud is extracted to obtain the convex hull. The farthest point pair of a convex hull in a planar point cloud can be calculated using the algorithm proposed by Shamos (1978) for enumerating the antipodal point pairs of an n-point convex hull (the rotating calipers method) as follows:
1. Calculate the endpoints in the y-direction of the convex polygon (ymin and ymax).
2. Construct two horizontal tangents from ymin and ymax. Calculate the distance
between the pairs and maintain it as a current maximum since they are already a pair
of anti-podal points.
3. Simultaneously rotate the two lines until one coincides with one of the sides of
the polygon.
4. A new pair of anti-podal points is generated at this time. The new distance is calcu-
lated and compared with the current maximum and updated if it is greater than the
current maximum.
5. Repeat the process in steps 3 and 4 until a pair of the anti-podal points (ymin and
ymax) is produced again.
6. Output the pair of anti-podal points determining the maximum diameter.
The time complexity of this algorithm is O(n). A pair of anti-podal points with the
maximum diameter calculated using the above algorithm represents the maximum canopy
width (L) of this single crop (Figure 7h).
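For clarity, the sketch below uses SciPy's convex hull plus an O(h²) pairwise scan over the hull vertices instead of the O(n) rotating-calipers routine described in the text; both yield the same farthest antipodal pair, i.e., the diameter L.

```python
import numpy as np
from scipy.spatial import ConvexHull
from scipy.spatial.distance import pdist

def max_crown_width(points):
    """points: (N, 3) single-plant cloud; returns crown width L in input units."""
    xy = points[:, :2]               # project vertically onto the xy-plane
    hull = ConvexHull(xy)
    verts = xy[hull.vertices]        # convex polygon contour
    return pdist(verts).max()        # farthest point pair = diameter L
```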

3. Materials
The experiments were conducted in a joint greenhouse of the Beijing Academy of Agriculture and Forestry (39°56′ N, 116°16′ E). The lettuce planting area measured 10 m × 40 m. Six lettuce types (C, W, R, S, B, and L, representing Crisphead lettuce, Wild lettuce, Romaine, Stem lettuce, Butterhead lettuce, and Loose-leaf lettuce) were grown in pots. Three varieties of each type were planted, totaling 18 varieties with 3 replicates for each variety. One lettuce plant was planted per pot. The pots were under normal water and fertilizer management. The data were acquired on 11, 12, and 13 November 2021, and 54 point clouds of potted plants were obtained. A commercial mobile laser 3D plant phenotyping platform, the PlantEye F500 developed by Phenospex B.V. (Heerlen, The Netherlands; http://phenospex.com/products/plant-phenotyping/science-planteye-3d-laser-scanner/, accessed on 15 October 2021), was also used to obtain point cloud data. The measurement principle and physical installation of this sensor are shown in Figure 8. Furthermore, the plant height and maximum crown width were extracted from the obtained point cloud data and compared with the manual measurements through the point cloud processing and analysis pipeline. The extraction results are shown in Table 1. The acquired lettuce strips were arranged according to the width of the UGV, since the width of the UGV was fixed.
Figure 8. PlantEye F500 measurement principle and physical installation diagram. (a) The 3D laser scan sensor. The red and blue lines represent the laser line and the reflection of the laser after projecting onto the plant, received by the CMOS, respectively. (b) The sensor mounted on the overhead orbital system for easy movement.
Table 1. Performance and cost comparison between UGV-LiDAR phenotyping system and PlantEye F500.

Parameter Name              UGV-LiDAR Phenotyping System    PlantEye F500
Flux                        810 plants/h                    1020 plants/h
Cost                        $11,780                         $147,000
Point cloud density         380,000 points/plant            100,000 points/plant
Pipeline processing time    12,628 ms/plant                 3157 ms/plant
4. Results
4.1. Point Cloud Quality
Point Cloud Quality 12,628 ms/plant 3157 ms/pla
The point cloud data of the potted lettuce were obtained using the UGV-LiDAR
4. Results phenotyping platform and PlantEye F500. The single point cloud of each lettuce plant was
segmented through the developed point cloud processing pipeline. The visualization of the
4.1. Point single-point
Cloud Qualitycloud of the lettuce plants obtained by the two platforms is shown in Figure 9.
Although the lettuce point cloud obtained by PlantEye F500 showed a better view due to the RGB information, a part of the leaf may be missing in reality. The obtained leaf point cloud only had the points of the upper surface leaves, with only a thin layer, and did not provide information on the lower part of the obscured leaves. The point cloud of a single lettuce obtained by the designed phenotype platform contained the echo intensity information; the green shade in the figure represents the laser echo intensity. However, this method did not provide RGB information. Nevertheless, the platform obtained more leaf points than PlantEye F500 and contained the lettuce leaf points that are not severely obscured. The thickness of individual leaves obtained by the platform
also increased compared with the thickness obtained by PlantEye F500. PlantEye F500 uses a low-powered single-line laser with weak penetration and thus can only detect the upper leaves. Meanwhile, the UGV-LiDAR phenotyping platform uses a more powerful 16-line LiDAR and can handle multiple echoes, and thus can detect the obscured leaves. It can also detect the reflected light from the laser within a single leaf, thus increasing the thickness of the single-leaf point cloud. Therefore, the point cloud of a single lettuce plant obtained by the UGV-LiDAR phenotyping platform had better point cloud integrity than that obtained by PlantEye F500, which gives it some advantages in the phenotype detection of the outer contour of the canopy 3D structure of crops.

Figure 9. Comparison of the single-point cloud of lettuce plants acquired by UGV-LiDAR platform and PlantEye.
4.2. Evaluation of Phenotypic Parameter Accuracy

The lettuce plant height and maximum crown width obtained by the UGV-LiDAR phenotyping platform and PlantEye F500 were extracted through the lettuce point cloud processing and analysis pipeline. This paper used the coefficient of determination (R2) and the root mean square error (RMSE) to evaluate the degree of agreement with the manually measured values. The results showed that the accuracy of the point cloud plant height estimation obtained by the UGV-LiDAR phenotyping system designed in this study is good (R2: 0.97996, RMSE: 1.51 cm) (Figure 10). PlantEye had a poorer estimation accuracy of the point cloud plant height than UGV-LiDAR (R2: 0.93751, RMSE: 2.54 cm). Furthermore, the estimation accuracy of the point cloud maximum canopy width obtained by UGV-LiDAR was poorer than that of plant height. Nevertheless, the accuracy of the point cloud maximum canopy width estimation obtained by PlantEye (R2: 0.91798, RMSE: 5.25 cm) was similar to that of the UGV-LiDAR phenotyping system (R2: 0.90975, RMSE: 4.99 cm). Correlation analysis showed that both systems could accurately measure the plant height and maximum canopy width of the vegetables. However, the developed UGV-LiDAR phenotyping system could estimate plant height and maximum crown width more accurately than the PlantEye F500 system.
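The two agreement metrics can be computed with a few lines of NumPy. This is a minimal sketch under the convention of evaluating R2 and RMSE directly between pipeline estimates and manual measurements; the paper's figures show linear fits, whose exact fitting procedure may differ, and the example arrays are placeholders, not the paper's data.

```python
import numpy as np

def r2_rmse(estimated, measured):
    estimated, measured = np.asarray(estimated), np.asarray(measured)
    ss_res = np.sum((measured - estimated) ** 2)          # residual sum of squares
    ss_tot = np.sum((measured - measured.mean()) ** 2)    # total sum of squares
    r2 = 1.0 - ss_res / ss_tot
    rmse = np.sqrt(np.mean((measured - estimated) ** 2))
    return r2, rmse

# Example with placeholder values (cm):
# r2, rmse = r2_rmse([18.2, 25.0, 30.7], [18.0, 24.6, 31.1])
```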

Figure 10. Comparison of plant height and maximum crown width extracted using the point cloud processing pipeline. (a) Linear fit of plant height estimated by point cloud obtained by UGV-LiDAR phenotyping system against manual measurements. (b) Linear fit of maximum crown width estimated by point cloud obtained by UGV-LiDAR phenotyping system against manual measurements. (c) Linear fit of plant height estimated by point cloud obtained by PlantEye against manual measurements. (d) Linear fit of the maximum canopy width estimated by the point cloud obtained by PlantEye against manual measurements.
4.3. Performance and Cost

PlantEye F500 is a single-line laser mounted on the orbital overhead travel phenotyping platform established in the greenhouse of the Beijing Academy of Agriculture and Forestry. The moving speed of the orbit was set to 300 cm/min in actual use, and the orbit can acquire about 1020 plants per hour without stopping. Although the moving speed of the Velodyne VLP-16 LiDAR on the vehicle track was faster (344 cm/min), it had a slower acquisition speed (810 plants per hour) than PlantEye F500 because the vehicle requires time to move. The PlantEye F500 costs about $147,000, while the UGV-LiDAR phenotyping platform costs only $11,780, which is significantly lower. The point cloud pipeline runs on a desktop workstation (Intel Core i7 processor, 2.9 GHz CPU, 32 GB RAM, Windows 11 OS). Moreover, the point cloud of lettuce acquired by the UGV-LiDAR phenotyping system was more dense, with an average of about 380,000 points per plant and a processing time of 12,628 ms, while the point cloud acquired by PlantEye F500 was sparse, with an average of about 100,000 points per plant and a processing time of 3157 ms. This indicates that the UGV-LiDAR phenotyping system was less efficient in post-processing than PlantEye F500. A comparison of performance and cost is shown in Table 1.
4.4. Morphological Differences


4.4. Differences between
between Different
Different Categories
Categories of of Lettuce
Lettuce
The lettuce
The lettucephenotypic
phenotypicparameters
parameters obtained
obtained fromfrom
the the UGV-LiDAR
UGV-LiDAR phenotyping
phenotyping plat-
platform can be used to determine the differences in morphological traits
form can be used to determine the differences in morphological traits among various lettuce among various
lettuceHerein,
types. types. Herein,
the mean theand
mean and variance
variance of lettuce
of lettuce plant height
plant height and maximum
and maximum crown crown
width
width obtained
obtained from the from the UGV-LiDAR
UGV-LiDAR phenotyping
phenotyping platformplatform were calculated
were calculated using theusing the
software
software
SPSS 25.0.SPSS 25.0. A analysis
A statistical statisticalofanalysis
significantof significant
differences differences was also (Figure
was also performed performed 11).
(Figure
At least 11).
one At leastvariety
lettuce one lettuce variety had significantly
had significantly different plant different
heightplant height andcrown
and maximum maxi-
mum crown
width from the width
otherfrom the other
varieties varieties (Kruskal–Wallis
(Kruskal–Wallis U test, p < 0.05).
test and Mann–Whitney
test and Mann–Whitney U
The Stem lettuce had the highest mean plant height and the largest mean
test, p < 0.05). The Stem lettuce had the highest mean plant height and the largest mean maximum crown
width,
maximum whilecrown
Butterhead
width,lettuce
whilehad the shortest
Butterhead and smallest
lettuce had themean maximum
shortest crown width.
and smallest mean
The plant height
maximum crownofwidth.
Stem lettuce
The plantwasheight
significantly
of Stemdifferent
lettuce wasfrom that of Crisphead
significantly different lettuce,
from
Butterhead Lettucelettuce,
that of Crisphead (p < 0.01), and Loose-leaf
Butterhead Lettuce (p lettuce (p and
< 0.01), < 0.05). Furthermore,
Loose-leaf lettuce (p the< plant
0.05).
height of Butterhead
Furthermore, the plantlettuce
heightwas ofsignificantly different
Butterhead lettuce wasfrom that of Wild
significantly lettuce from
different (p < 0.01)
that
and Romaine (p < 0.05). The maximum crown width of Stem lettuce
of Wild lettuce (p < 0.01) and Romaine (p < 0.05). The maximum crown width of Stem was significantly dif-
ferent from that of Butterhead lettuce (p < 0.01) and Crisphead lettuce (p
lettuce was significantly different from that of Butterhead lettuce (p < 0.01) and Crisphead < 0.05). Moreover,
the maximum
lettuce (p < 0.05).crown widththe
Moreover, of Butterhead
maximum crown Lettuce was of
width significantly
Butterhead different
Lettuce was from that
signif-
of Wilddifferent
icantly lettuce and fromRomaine (p < 0.05).
that of Wild lettuceInand
conclusion,
Romaine the (p < surfaces
0.05). In of the UGV-LiDAR
conclusion, the sur-
phenotyping platform and phenotyping
faces of the UGV-LiDAR 3D point cloudplatform
resolutionand pipeline
3D point are sensitive enough to
cloud resolution detect
pipeline
subtle differences between different lettuce types.
are sensitive enough to detect subtle differences between different lettuce types.

Figure 11. Analysis of plant height and maximum crown width differences. The central horizontal line indicates the median. The top and bottom of the box indicate the 25th and 75th percentiles, respectively. The upper and lower solid dots indicate outliers beyond the upper and lower quartiles, respectively; whiskers extend to the extreme non-outliers. The hollow dots represent the mean. Different letters indicate statistically significant differences between species (p < 0.05).
5. Discussion
The comparison results showed that the phenotyping platform and its phenotypic parameter extraction pipeline could reliably measure the plant height and maximum crown width of greenhouse-potted crops through the synergistic operation of the LiDAR and the slide rail, thus helping breeders to easily observe and screen good traits across many samples. The PlantEye F500 is a well-established commercial plant 3D scanner used for automatic and continuous observation of plant growth status [26]. Compared with the PlantEye F500, the UGV had higher estimation accuracy for plant height and lower estimation accuracy for maximum crown width. Moreover, the UGV was less costly than other phenotyping platforms, such as the cable-suspended phenotyping platform [11], orbital overhead travel phenotyping platforms [8], and other immovable on-site phenotyping platforms. Furthermore, these platforms are difficult to disassemble and reinstall at other plots after construction is completed,
while UGV and UAV platforms can be used in any plot due to their high flexibility [29].
However, the turbulence caused by the rotor blades of UAVs in low flight may significantly
affect the plant canopy structure, leading to large errors when measuring phenotypic
parameters. Moreover, the resolution obtained by the sensors is usually very low when
the UAVs reach a height where the airflow does not cause disturbance to the plants. The
resolution of the images or point clouds obtained by UGV may be higher than that obtained
by the UAV since the sensors on the UGV phenotyping platform are closer to the top of the
plant canopy.
However, UGVs have some disadvantages. First, the quality of the ground soil limits
UGV movement. For example, wet soil can cause the UGV to become stuck in the mud, leading to soil compaction and damage to plants. The traditional UGV phenotyping platform moves continuously while collecting data, and bumps during movement can degrade point cloud acquisition. Therefore, the LiDAR frames should be stitched to obtain the complete 3D morphology of the plant canopy as a high-density point cloud. Vehicle jitter may also hinder wheel odometry, laser odometry, and other SLAM-based map-building methods. If the LiDAR frames are not stitched, a LiDAR with a high line count would be
needed to obtain a dense point cloud [30]. However, the increased number of LiDAR lines
increases the cost. Furthermore, the maximum crown width estimation accuracy may not be as high as the plant height estimation accuracy due to the difficulty of manually measuring the maximum crown width. Unlike a traditional UGV, which causes contact interference with
larger plants during movement, the new UGV-LiDAR phenotyping platform has a small
electric slide rail for efficient data acquisition while the plants are stationary, leading to
high accuracy. Meanwhile, the LiDAR sensors are closer to the plants, increasing accuracy.
The small slide rail also increases the running accuracy compared with the track of the
orbital overhead travel phenotyping platform.
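As a concrete illustration of the frame-stitching step discussed above, the sketch below incrementally registers consecutive LiDAR frames with point-to-point ICP using Open3D. The file names, voxel size, and correspondence distance are illustrative assumptions rather than values from this study, and the identity initialization relies on consecutive frames overlapping closely, as they do when the sensor moves slowly at uniform speed:

import numpy as np
import open3d as o3d

def stitch_frames(paths, voxel=0.005, max_dist=0.02):
    """Incrementally register LiDAR frames into one merged cloud."""
    merged = o3d.io.read_point_cloud(paths[0])
    for path in paths[1:]:
        frame = o3d.io.read_point_cloud(path)
        # Point-to-point ICP aligns the new frame to the growing cloud;
        # voxel downsampling speeds up the correspondence search.
        result = o3d.pipelines.registration.registration_icp(
            frame.voxel_down_sample(voxel),
            merged.voxel_down_sample(voxel),
            max_dist, np.eye(4),
            o3d.pipelines.registration.TransformationEstimationPointToPoint())
        merged += frame.transform(result.transformation)
    return merged

# Example (hypothetical frame files recorded along one slide-rail pass):
# cloud = stitch_frames(["frame_000.pcd", "frame_001.pcd", "frame_002.pcd"])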
However, the proposed UGV-LiDAR phenotyping system has some limitations and
unfinished parts: (1) the point cloud data acquired by the UGV-LiDAR phenotyping system
has a large amount of redundancy, leading to inefficiency in post-processing. A denser point
cloud is not necessarily better for extracting phenotypic parameters such as plant height and maximum crown width, since it carries more noise, which hampers the extraction of phenotypic data. (2) Subsequent studies should optimize the UGV-LiDAR phenotyping system by using a faster and more accurate vehicle-mounted motorized slide rail and by replacing the LiDAR with one of higher accuracy and lower line count. Further studies should also increase the number of resolvable phenotypic parameters, such as leaf number, width, inclination, and area. In addition, deep learning should be used to improve the speed and accuracy of single-plant lettuce identification.
(3) Future studies should design UGVs with adjustable height and span so the platform can serve more plants and scenes and acquire phenotypic information of plants across different growth and development periods. (4) Finally, various sensors, such as RGB, thermal infrared, and multispectral cameras, should be added in the future to monitor more phenotypic information of plants.
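Limitation (1) above, point cloud redundancy, is commonly mitigated by voxel-grid downsampling before trait extraction. A minimal Open3D sketch follows; the input file name and the 5 mm voxel size are illustrative assumptions, not settings from this study:

import open3d as o3d

pcd = o3d.io.read_point_cloud("lettuce_strip.pcd")  # hypothetical input file

# Keep one averaged point per 5 mm voxel: this thins redundant points
# and attenuates sensor noise without erasing canopy structure.
down = pcd.voxel_down_sample(voxel_size=0.005)

# Optionally drop sparse stragglers that survive downsampling.
down, _ = down.remove_statistical_outlier(nb_neighbors=20, std_ratio=2.0)

print(f"{len(pcd.points)} -> {len(down.points)} points")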

6. Conclusions
In this paper, a new UGV phenotyping platform equipped with an electric slide rail and a phenotype data acquisition-and-analysis pipeline was proposed to avoid the effect of movement bumps on the quality of point cloud acquisition. The platform was developed using a 16-line LiDAR, an electric slide rail, and a UGV that moves automatically via RTK-GPS to obtain fine point cloud data. The 3D structure of the lettuce canopy was obtained by a homogeneous frame-overlay method that superimposes frames captured at uniform slide-rail speed. This method has a cost advantage over traditional UGV point cloud acquisition systems that rely on high-line-count LiDAR. The point cloud was matched and fused by the iterative closest point (ICP) algorithm through the pipeline to complete the 3D reconstruction of a whole strip point cloud. The random sample consensus (RANSAC), Euclidean clustering, and k-means clustering algorithms were used to obtain the 3D point cloud of a single lettuce canopy. The plant height and maximum crown width were also accurately estimated.
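To make the summarized pipeline concrete, the sketch below chains RANSAC ground-plane removal, density-based clustering (DBSCAN here as a simple stand-in for the Euclidean and k-means segmentation described above), and per-plant extraction of plant height and maximum crown width. The input file and all thresholds are illustrative assumptions, and the z axis is assumed vertical:

import numpy as np
import open3d as o3d
from scipy.spatial import ConvexHull

pcd = o3d.io.read_point_cloud("strip.pcd")  # hypothetical merged strip cloud

# 1. Fit the dominant plane with RANSAC and drop its inliers (ground/bench).
_, ground_idx = pcd.segment_plane(distance_threshold=0.01,
                                  ransac_n=3, num_iterations=1000)
plants = pcd.select_by_index(ground_idx, invert=True)

# 2. Separate individual plants by density clustering.
labels = np.asarray(plants.cluster_dbscan(eps=0.03, min_points=50))
pts = np.asarray(plants.points)

# 3. Per-cluster traits: height as the vertical extent, maximum crown width
#    as the largest pairwise distance between convex-hull vertices in xy.
for k in range(labels.max() + 1):
    p = pts[labels == k]
    height = p[:, 2].max() - p[:, 2].min()
    hull = ConvexHull(p[:, :2])
    v = p[hull.vertices, :2]
    width = np.linalg.norm(v[:, None, :] - v[None, :, :], axis=-1).max()
    print(f"plant {k}: height {height:.3f} m, crown width {width:.3f} m")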
The new UGV phenotyping platform can measure plant height and maximum crown width with high accuracy and at a reduced cost compared with the PlantEye F500. Therefore, after further expansion of the algorithm, the platform can be used to measure other 3D plant phenotype data. The UGV platform can also be fitted with other sensors to monitor more dimensions of phenotypic information.

Author Contributions: Conceptualization, X.G. and J.F.; methodology, S.C., W.W. and J.F.; software,
S.C. and W.G.; validation, S.C.; resources, X.G.; data curation, S.C. and X.L.; writing—original draft
preparation, S.C., W.W., and J.F.; writing—review and editing, S.C. and X.G.; visualization, S.C.;
supervision, X.G.; funding acquisition, X.G. All authors have read and agreed to the published
version of the manuscript.
Funding: This research was funded by the National Key R&D Program (2022YFD2002305), Con-
struction of Beijing Nova Program (Z211100002121065), Collaborative Innovation Center of Beijing
Academy of Agricultural and Forestry Sciences (KJCX201917), and Science and Technology Innova-
tion Special Construction Funded Program of Beijing Academy of Agriculture and Forestry Sciences
(KJCX20210413).
Institutional Review Board Statement: Not applicable.
Informed Consent Statement: Not applicable.
Data Availability Statement: Not applicable.
Acknowledgments: The authors would like to thank the editor and the anonymous reviewers for
their valuable suggestions to improve the quality of this paper.
Conflicts of Interest: The authors declare no conflict of interest.

References
1. Dhondt, S.; Wuyts, N.; Inze, D. Cell to whole-plant phenotyping: The best is yet to come. Trends Plant Sci. 2013, 18, 433–444.
[CrossRef]
2. Watt, M.; Fiorani, F.; Usadel, B.; Rascher, U.; Muller, O.; Schurr, U. Phenotyping: New Windows into the Plant for Breeders. Annu.
Rev. Plant Biol. 2020, 71, 689–712. [CrossRef]
3. Grosskinsky, D.K.; Svensgaard, J.; Christensen, S.; Roitsch, T. Plant phenomics and the need for physiological phenotyping across
scales to narrow the genotype-to-phenotype knowledge gap. J. Exp. Bot. 2015, 66, 5429–5440. [CrossRef]
4. Kim, S.L.; Solehati, N.; Choi, I.C.; Kim, K.H.; Kwon, T.R. Data Management for Plant Phenomics. J. Plant Biol. 2017, 60, 285–297.
[CrossRef]
5. Wu, S.; Wen, W.L.; Wang, Y.J.; Fan, J.C.; Wang, C.Y.; Gou, W.B.; Guo, X.Y. MVS-Pheno: A Portable and Low-Cost Phenotyping
Platform for Maize Shoots Using Multiview Stereo 3D Reconstruction. Plant Phenomics 2020, 2020, 1848437. [CrossRef]
6. Fiorani, F.; Schurr, U. Future Scenarios for Plant Phenotyping. Annu. Rev. Plant Biol. 2013, 64, 267–291. [CrossRef]
7. Tester, M.; Langridge, P. Breeding Technologies to Increase Crop Production in a Changing World. Science 2010, 327, 818–822.
[CrossRef]
8. Virlet, N.; Sabermanesh, K.; Sadeghi-Tehran, P.; Hawkesford, M.J. Field Scanalyzer: An automated robotic field phenotyping
platform for detailed crop monitoring. Funct. Plant Biol. 2017, 44, 143–153. [CrossRef]
9. Shafiekhani, A.; Kadam, S.; Fritschi, F.B.; DeSouza, G.N. Vinobot and Vinoculer: Two Robotic Platforms for High-Throughput
Field Phenotyping. Sensors 2017, 17, 214. [CrossRef]
10. Sun, S.P.; Li, C.Y.; Paterson, A.H.; Jiang, Y.; Xu, R.; Robertson, J.S.; Snider, J.L.; Chee, P.W. In-field High Throughput Phenotyping
and Cotton Plant Growth Analysis Using LiDAR. Front. Plant Sci. 2018, 9, 16. [CrossRef]
11. Kirchgessner, N.; Liebisch, F.; Yu, K.; Pfeifer, J.; Friedli, M.; Hund, A.; Walter, A. The ETH field phenotyping platform FIP: A
cable-suspended multi sensor system. Funct. Plant Biol. 2017, 44, 154–168. [CrossRef]
12. Yang, G.J.; Liu, J.G.; Zhao, C.J.; Li, Z.H.; Huang, Y.B.; Yu, H.Y.; Xu, B.; Yang, X.D.; Zhu, D.M.; Zhang, X.Y.; et al. Unmanned
Aerial Vehicle Remote Sensing for Field-Based Crop Phenotyping: Current Status and Perspectives. Front. Plant Sci. 2017, 8, 1111.
[CrossRef]
13. Roitsch, T.; Cabrera-Bosquet, L.; Fournier, A.; Ghamkhar, K.; Jimenez-Berni, J.; Pinto, F.; Ober, E.S. Review: New sensors and
data-driven approaches—A path to next generation phenomics. Plant Sci. 2019, 282, 2–10. [CrossRef]
14. Zhu, J.Q.; van der Werf, W.; Anten, N.P.R.; Vos, J.; Evers, J.B. The contribution of phenotypic plasticity to complementary light
capture in plant mixtures. New Phytol. 2015, 207, 1213–1222. [CrossRef] [PubMed]
15. Sadras, V.O.; Slafer, G.A. Environmental modulation of yield components in cereals: Heritabilities reveal a hierarchy of phenotypic
plasticities. Field Crops Res. 2012, 127, 215–224. [CrossRef]
16. Lati, R.N.; Filin, S.; Eizenberg, H. Estimation of Plants’ Growth Parameters via Image-Based Reconstruction of Their Three-
Dimensional Shape. Agron. J. 2013, 105, 191–198. [CrossRef]
17. Zhao, C.J.; Zhang, Y.; Du, J.J.; Guo, X.Y.; Wen, W.L.; Gu, S.H.; Wang, J.L.; Fan, J.C. Crop Phenomics: Current Status and
Perspectives. Front. Plant Sci. 2019, 10, 714. [CrossRef]
18. McCarthy, C.L.; Hancock, N.H.; Raine, S.R. Applied machine vision of plants: A review with implications for field deployment in
automated farming operations. Intell. Serv. Robot. 2010, 3, 209–217. [CrossRef]
19. Jin, S.C.; Su, Y.J.; Wu, F.F.; Pang, S.X.; Gao, S.; Hu, T.Y.; Liu, J.; Guo, Q.H. Stem-Leaf Segmentation and Phenotypic Trait Extraction
of Individual Maize Using Terrestrial LiDAR Data. IEEE Trans. Geosci. Remote Sens. 2019, 57, 1336–1346. [CrossRef]
20. Song, Y.; Wilson, R.; Edmondson, R.; Parsons, N. Surface modelling of plants from stereo images. In Proceedings of the Sixth
International Conference on 3-D Digital Imaging and Modeling (3DIM 2007), Montreal, QC, Canada, 21–23 August 2007; pp. 312–319.
21. Hui, F.; Zhu, J.Y.; Hu, P.C.; Meng, L.; Zhu, B.L.; Guo, Y.; Li, B.G.; Ma, Y.T. Image-based dynamic quantification and high-accuracy
3D evaluation of canopy structure of plant populations. Ann. Bot. 2018, 121, 1079–1088. [CrossRef]
22. Zheng, G.; Moskal, L.M. Computational-Geometry-Based Retrieval of Effective Leaf Area Index Using Terrestrial Laser Scanning.
IEEE Trans. Geosci. Remote Sens. 2012, 50, 3958–3969. [CrossRef]
23. Sun, C.X.; Huang, C.W.; Zhang, H.Q.; Chen, B.Q.; An, F.; Wang, L.W.; Yun, T. Individual Tree Crown Segmentation and Crown
Width Extraction From a Heightmap Derived From Aerial Laser Scanning Data Using a Deep Learning Framework. Front. Plant
Sci. 2022, 13, 914974. [CrossRef] [PubMed]
24. Zhang, B.; Wang, X.J.; Yuan, X.Y.; An, F.; Zhang, H.Q.; Zhou, L.J.; Shi, J.G.; Yun, T. Simulating Wind Disturbances over Rubber
Trees with Phenotypic Trait Analysis Using Terrestrial Laser Scanning. Forests 2022, 13, 1298. [CrossRef]
25. Eitel, J.U.H.; Vierling, L.A.; Long, D.S.; Hunt, E.R. Early season remote sensing of wheat nitrogen status using a green scanning
laser. Agric. For. Meteorol. 2011, 151, 1338–1345. [CrossRef]
26. Nguyen, P.; Badenhorst, P.E.; Shi, F.; Spangenberg, G.C.; Smith, K.F.; Daetwyler, H.D. Design of an Unmanned Ground Vehicle and
LiDAR Pipeline for the High-Throughput Phenotyping of Biomass in Perennial Ryegrass. Remote Sens. 2021, 13, 20. [CrossRef]
27. Sanz, R.; Rosell, J.R.; Llorens, J.; Gil, E.; Planas, S. Relationship between tree row LIDAR-volume and leaf area density for fruit
orchards and vineyards obtained with a LIDAR 3D Dynamic Measurement System. Agric. For. Meteorol. 2013, 171, 153–162.
[CrossRef]
28. Besl, P.J.; McKay, N.D. A method for registration of 3-D shapes. IEEE Trans. Pattern Anal. Mach. Intell. 1992, 14, 239–256.
[CrossRef]
29. Deery, D.; Jimenez-Berni, J.; Jones, H.; Sirault, X.; Furbank, R. Proximal Remote Sensing Buggies and Potential Applications for
Field-Based Phenotyping. Agronomy 2014, 4, 349–379. [CrossRef]
30. Qiu, Q.; Sun, N.; Bai, H.; Wang, N.; Fan, Z.Q.; Wang, Y.J.; Meng, Z.J.; Li, B.; Cong, Y. Field-Based High-Throughput Phenotyping
for Maize Plant Using 3D LiDAR Point Cloud Generated with a “Phenomobile”. Front. Plant Sci. 2019, 10, 554. [CrossRef]

Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual
author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to
people or property resulting from any ideas, methods, instructions or products referred to in the content.
