agriculture

Review
Research Status and Prospects on Plant Canopy
Structure Measurement Using Visual Sensors Based
on Three-Dimensional Reconstruction
Jizhang Wang 1, * , Yun Zhang 2 and Rongrong Gu 3
1 Key Laboratory of Modern Agricultural Equipment and Technology, Ministry of Education and Jiangsu
Province, School of Agricultural Engineering, Jiangsu University, Zhenjiang 212013, China
2 Institute of Field Management Equipment, School of Agricultural Engineering, Jiangsu University,
Zhenjiang 212013, China; 2221816015@stmail.ujs.edu.cn
3 Shanghai Research Institute for Intelligent Autonomous Systems, School of Electronics and Information
Engineering, Tongji University, Shanghai 201804, China; rongronggu@tongji.edu.cn
* Correspondence: whxh@ujs.edu.cn; Tel.: +86-139-2158-7906

Received: 23 August 2020; Accepted: 5 October 2020; Published: 8 October 2020 

Abstract: Three-dimensional (3D) plant canopy structure analysis is an important part of plant
phenotype studies. To promote the development of plant canopy structure measurement based on
3D reconstruction, we reviewed the latest research progress achieved using visual sensors to measure
the 3D plant canopy structure from four aspects: the principles of 3D plant measurement technologies,
the corresponding instruments and specifications of different visual sensors, the methods of plant
canopy structure extraction based on 3D reconstruction, and the conclusions and prospects for plant
canopy measurement technology. The leading algorithms for every step of plant canopy structure
measurement based on 3D reconstruction at the current stage of research are introduced. Finally,
future prospects for a standard phenotypical analytical method, rapid reconstruction, and precision
optimization are described.

Keywords: 3D measurement; 3D reconstruction; plant phenotype; canopy structure; point cloud processing

1. Introduction
With the rapid development of plant phenotyping technology, phenotype identification has become a key
process in improving plant yield, and analyzing plant phenotypes with intelligent equipment is one
of the main methods to achieve smart agriculture [1]. Digital and visual research of three-dimensional
(3D) plant canopy structures is an important part of plant phenotypical studies. With the improvement
in computer processing capabilities and reductions in the size of 3D data measurement devices, 3D plant
canopy structure measurement and reconstruction studies have begun to increase exponentially [2].
This paper introduces five common visual techniques for 3D plant canopy data measurement,
their corresponding instrument models and parameters, and their advantages and disadvantages.
These technologies are binocular stereo vision, multi-view vision, time of flight (ToF), light detection
and ranging (LiDAR), and structured light. Following this, the general process of 3D reconstruction
and structure index extraction of plant canopies are summarized. The accuracy and correlation of the
structure index of the reconstructed plant canopy with different visual devices are evaluated, and the
common algorithms of plant 3D point cloud processing are reviewed. Then, the technical defects,
including the lack of matching between reconstructed 3D plant structure data and physiological data,
the low reconstruction accuracy, and the high device costs, are outlined. Finally, the development
trends in 3D plant canopy reconstruction technology and structure measurement are described.

Agriculture 2020, 10, 462; doi:10.3390/agriculture10100462 www.mdpi.com/journal/agriculture


2. 3D Plant Canopy Data Measurement Technology

2.1. Binocular Stereo Vision Technology and Equipment

Binocular vision uses two cameras to image the same object from different positions, which produces a difference in the coordinates of similar features within the two stereo images. This difference is called binocular disparity, and the distance from the object to the camera can be calculated from it. Disparity-based distance measurement is applied to calculate depth information [2]. The principle of the method is shown in Figure 1.

Figure 1. Binocular stereo vision principle. x1 and x2 are image-plane coordinates and can be obtained from the image plane directly, and camera calibration gives f (focal length) and b (baseline). The depth z of the object can be calculated by the triangle similarity principle, which gives z = f·b/(x1 − x2), and x and y can then be calculated from z and the image plane coordinates.

The main process of binocular vision reconstruction includes image collection, camera calibration, feature extraction, stereo matching, and 3D reconstruction. Camera calibration is a key step for obtaining stereo vision data with binocular cameras; its main purpose is to estimate the parameters of the lens and image sensor of a camera and to use these parameters to measure the size of an object in world units or determine the relative location between the camera and the object. The main camera calibration methods include the Tsai method [3], Faugeras–Toscani method [4], Martins' two-plane method [5], Pollastri method [6], Caprile–Torre method [7], and Zhang Zhengyou's method [8]. These are traditional calibration methods that obtain the camera parameters by using a highly accurate calibration piece to establish the correspondence between space points and image points. In addition, there are self-calibration technologies and calibration techniques based on active vision [9]. Andersen et al. [10] used Zhang Zhengyou's camera calibration method to calibrate the internal parameters of a binocular camera, and then obtained the depth data of wheat using a stereo matching method with simulated annealing.

Stereo matching or disparity estimation is the process of finding the pixels in the multiple views that correspond to the same 3D point in the scene. The disparity map refers to the apparent pixel difference or motion between a pair of stereo images. The calculation of disparity maps in stereo matching is both challenging and the most important part of binocular stereo vision technology.
Various algorithms can be used to calculate pixel disparity, which can be divided into global, local, and
iterative methods according to different optimization theories; they can also be divided into region
matching, feature matching, and phase matching by what elements are represented by the images.

Malekabadi [11] used an algorithm based on local methods (ABLM) and an algorithm based on global
methods (ABGM) to obtain the disparity image, which can provide plant shape data. Two stereo
matching algorithms, 3D minimum spanning tree (3DMST) [12] and semi-global block matching (SGBM), are
state-of-the-art and widely used. Bao [13] designed a high-throughput analysis system to measure plant
height in the field, combined with the 3DMST stereo matching technique. Baweja [14]
coupled deep convolutional neural networks and SGBM stereo matching to count stalks and measure
stalk width. Dandrifosse [15] used SGBM stereo matching to extract wheat structure features,
including height, leaf area, and leaf angles, with two nadir cameras under field conditions; the results showed
that the 3D point cloud produced by the stereo camera can be used to measure plant height and other
morphological characteristics, although some errors were noted.
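
To make the disparity-to-depth relationship concrete, the following is a minimal sketch of SGBM disparity estimation with OpenCV, followed by the depth conversion z = f·b/(x1 − x2) from Figure 1. It assumes a rectified stereo pair and known calibration values; the file names, focal length, baseline, and matcher parameters are illustrative only, not those used in the cited studies.

```python
# Minimal sketch: SGBM disparity estimation and depth from disparity with OpenCV.
# Assumes a rectified stereo pair; all file names and parameter values are illustrative.
import cv2
import numpy as np

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# Semi-global block matching (SGBM); parameters depend on scene and image size.
sgbm = cv2.StereoSGBM_create(
    minDisparity=0,
    numDisparities=128,        # must be divisible by 16
    blockSize=5,
    P1=8 * 5 * 5,
    P2=32 * 5 * 5,
    uniquenessRatio=10,
    speckleWindowSize=100,
    speckleRange=2,
)
disparity = sgbm.compute(left, right).astype(np.float32) / 16.0  # fixed point -> pixels

# Depth from disparity: z = f * b / (x1 - x2), as in Figure 1.
f = 1200.0   # focal length in pixels (from calibration, illustrative)
b = 0.12     # baseline in meters (illustrative)
valid = disparity > 0
depth = np.zeros_like(disparity)
depth[valid] = f * b / disparity[valid]
```
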
The parameters of typical binocular cameras are shown in Table 1. Binocular stereo vision is
simple and inexpensive, and no further auxiliary equipment (such as a specific light source) or
special projection is required [16]. Stereo vision technology also has limitations. It is affected by
changes in scene lighting and requires a highly configured computing system to implement the stereo
matching algorithm. The measurement accuracy of binocular stereo depends on the baseline length:
the longer the baseline is compared with the distance to the measured object, the higher the accuracy.
Stereo vision does not acquire high-quality depth data, but the data are sufficient for interpretation
tasks in robotics and computer vision [2]. A robust disparity estimation is difficult in areas of homogeneous
color or occlusion [16], and a stereo camera may not reflect the actual boundary of the surface when
projecting on a smooth and curved surface, which is called the false boundary problem and affects
the correctness of feature matching in active stereo vision. To solve the false boundary problem, one
effective approach is to use dynamic and exploratory sensing; another is to move the cameras farther
away from the surface [17].

Table 1. Typical binocular stereo cameras.

Camera: Bumblebee2-03S2 | Zed2 | PM802 (PERCIPIO)
RGB resolution and frame rate: 648 × 488, 48 fps; 1024 × 768, 18 fps | 4416 × 1242, 15 fps; 3480 × 1080, 30 fps; 2560 × 720, 60 fps | 2560 × 1920, 1 fps; 1280 × 960, 1 fps; 640 × 480, 1 fps
Depth resolution and frame rate: 648 × 488, 48 fps | 2560 × 720, 15 fps (Ultra mode) | 1280 × 1920, 1 fps; 640 × 480, 1 fps
Baseline: 120 mm | 120 mm | 450 mm
Focal length: 2.5 mm | 2.12 mm | N.A.
Size (mm): 157 × 36 × 47.4 | 175 × 30 × 33 | 538.4 × 85.5 × 89.6
Weight (g): 342 | 135 | 2000
Measurable range (m): N.A. | 0.5–20 | 0.85–4.2
Field of view (vertical × horizontal): 66° × 43° | 110° × 70° | 56° × 46°
Accuracy: N.A. | <1% up to 3 m, <5% up to 15 m | 0.04–1%
Special features or limitations: extendable | 1. Inertial Measurement Unit (IMU); 2. depends on high-performance equipment | 1. protection: IP54; 2. intended for industrial equipment
Price ($): 116 | 449 | 11,766

N.A. indicates that data were not found; RGB is the abbreviation of red, green and blue.

2.2. Multi-View Vision Technology


Multi-view vision technology is an imaging method used to capture pictures of objects from
different perspectives with calibrated cameras. The feature points obtained from overlapping images are
used to calculate the shooting positions. Its main applications include structure-from-motion technology
(SfM) and multi-view stereo technology (MVS). There are two main multi-view vision approaches:
using multiple cameras to obtain 3D data, and rotating the cameras or the object to obtain 3D data (including
depth information). The 3D reconstruction processes for multi-view stereo vision and binocular
vision are similar; the biggest difference is that SfM uses highly redundant overlapping images to obtain
the camera position parameters, whereas binocular vision uses a traditional calibration method followed by
matching and 3D reconstruction. Although the model produced by multi-view vision is more accurate,
its calibration and synchronization, mainly involving camera locations, are more complicated than those
of a binocular camera.
SfM and MVS are applied in sequence: SfM is used to determine the camera poses, calibrate the intrinsic
parameters, and perform feature matching; then MVS is used to reconstruct the dense 3D scene.
Structure-from-motion (SfM) is a range imaging technique that estimates a 3D structure
by capturing a series of 2D images at different locations in a scene; its models include incremental,
global, and hybrid structures. It exploits highly redundant image features and matches the
3D positions of features based on the scale-invariant feature transform (SIFT) algorithm (or the
SURF or ORB algorithms). After estimating the camera poses and extracting a sparse point cloud (for example, using Bundler),
MVS technology is used to reconstruct a complete 3D object model from a suite of images taken from
the known camera locations after calibration [18]. MVS uses the epipolar geometric
constraint, checking whether pixels are consistent with a common epipolar geometry to match each pixel
(e.g., the clustering views for multi-view stereo (CMVS) and patch-based multi-view stereo (PMVS2)
algorithms). Some open source software packages for MVS are shown in Table 2.

Table 2. Open source software for multi-view stereo technology (MVS).

Project: Colmap | Gipuma + Fusibile | HPMVS | MICMAC | MVE | OpenMVS | PMVS
Language: C++, CUDA | C++, CUDA | C++ | C++ | C++ | C++, CUDA | C++, CUDA

CUDA: Compute Unified Device Architecture.
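
As an illustration of the SfM-to-MVS sequence described above, the sketch below drives COLMAP (one of the packages in Table 2) from Python. It assumes the colmap command-line tool is installed and that a folder images/ holds overlapping photographs; the paths are placeholders, and the dense-matching step requires a CUDA-capable GPU.

```python
# Minimal sketch of an SfM -> MVS pipeline using the COLMAP command-line tool.
# Assumes `colmap` is on PATH and `images/` contains overlapping photographs (illustrative paths).
import subprocess

def run(*args):
    subprocess.run(["colmap", *args], check=True)

# 1. SfM: feature detection, matching, and sparse reconstruction (camera poses).
run("feature_extractor", "--database_path", "db.db", "--image_path", "images")
run("exhaustive_matcher", "--database_path", "db.db")
run("mapper", "--database_path", "db.db", "--image_path", "images",
    "--output_path", "sparse")

# 2. MVS: undistort images, compute per-image depth maps, fuse into a dense point cloud.
run("image_undistorter", "--image_path", "images", "--input_path", "sparse/0",
    "--output_path", "dense")
run("patch_match_stereo", "--workspace_path", "dense")
run("stereo_fusion", "--workspace_path", "dense",
    "--output_path", "dense/fused.ply")
```
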

SfM generally produces sparse point clouds and MVS photogrammetry algorithms are used to
increase the point density by several orders of magnitude. As a result, the combined workflow is more
correctly referred to as ‘SfM-MVS’ [19]. The steps of point cloud formation based on SfM-MVS generally
include feature detection, keypoint correspondence, identifying geometrically consistent matches,
structure from motion, scale and georeferencing, refinement of parameter values, and multi-view
stereo image matching algorithms. Some typical commercial integrated software for implementing
SfM-MVS are shown in Table 3.

Table 3. Commercial software for 3D scene modeling utilizing SfM-MVS.

- ContextCapture (Bentley Acute3D): creates detailed 3D models quickly from simple photos.
- PhotoMesh (SkyLine): constructs full-element, fine, textured three-dimensional mesh models from a set of standard, disordered two-dimensional photographs.
- StreetFactory (AirBus): enables rapid and fully automatic processing of images from any aerial or street camera for the generation of a 3D textured database and distortion-free imagery.
- PhotoScan (AgiSoft): performs photogrammetric processing of digital images and generates 3D spatial data to be used in geographic information system (GIS) applications.
- Pix4DMapper (Pix4D): transforms images into digital maps and 3D models.
- RealityCapture (RealityCapture): extracts accurate 3D models from a set of ordinary images and/or laser scans.

Scale and georeferencing are special steps for aerial maps. The output of the SfM stage is a sparse, unscaled 3D point cloud in arbitrary units along with camera models and poses, so correct scale, orientation, or absolute position information needs to be established from known coordinates. Three methods can be used to enable accurate scale and georeferencing of the imagery. One is to use a minimum of three ground control points (GCPs) with XYZ coordinates to scale and georeference the SfM-derived point cloud [20]. Orientation can also be measured with an Inertial Measurement Unit (IMU) [21], or georeferencing can be performed from known camera positions derived from RTK-GPS measurements [22]. On the other hand, for small-scale plant measurement without unmanned aerial systems (UAS), the metric scaling factor can be derived from the known value of a geometrical feature in the point cloud: the raw point cloud is multiplied by a scale factor that is the ratio of the feature size in millimeters to its size in the pixel system of the raw point cloud, which determines an individual scale factor for every point cloud [23].

SfM can be applied to large-scale plant measurement. Unmanned aerial systems are necessary pieces of auxiliary equipment for large-scale experimental field measurement based on SfM-MVS. Images are acquired autonomously based on preset UAS parameters and camera settings, point cloud data are then generated by commercial software for 3D scene modeling, and plant height, density, etc. are calculated after point cloud processing. For example, Malambo [20] used a DJI Phantom 3 to acquire images; six or more portable GCPs were placed uniformly in the field and measured using a Trimble GeoXH GPS system for scale and georeferencing, with 100 readings taken per point and differentially post-processed using Trimble's Pathfinder Office software to achieve centimeter accuracy (<10 cm); Pix4Dcapture software based on SfM was used to generate a point cloud, which was then processed to obtain maize height. SfM can also be applied to small-scale plant measurement. Rose [23] used Pix4DMapper based on SfM-MVS to reconstruct single tomato plants, and extracted main stem height and convex hull from the 3D point clouds.
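
The scaling approach for small-scale reconstructions described above (the ratio between the known size of a reference feature and its size in the raw point cloud) can be written in a few lines. The sketch below is illustrative: the file name, endpoint coordinates, and reference length are placeholders rather than values from the cited studies.

```python
# Minimal sketch: scaling an arbitrary-unit SfM point cloud with a known reference length.
# File name, endpoints, and reference length are illustrative placeholders.
import numpy as np

points = np.loadtxt("sfm_points.xyz")          # N x 3 array, arbitrary SfM units
p_a = np.array([0.12, 0.40, 1.03])             # endpoints of a reference feature,
p_b = np.array([0.31, 0.42, 1.05])             # picked in the point cloud (SfM units)
known_length_mm = 150.0                        # measured length of that feature (mm)

# Scale factor = ratio of the feature's real size to its size in the raw point cloud.
scale = known_length_mm / np.linalg.norm(p_b - p_a)
points_mm = points * scale                     # point cloud now in millimeters
```
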
2.3. Time of Flight Technology

Time of Flight (ToF) is a high-precision ranging method. ToF cameras and LiDAR (light detection and ranging) scanning are based on Time of Flight technology. The imaging principles of ToF can be divided into pulsed-wave (PW-iToF) or continuous-wave (CW-iToF) modulation [24]. The ToF imaging principle is shown in Figure 2. CW-iToF emits near-infrared (NIR) light through a light-emitting diode (LED), which reflects back to the sensor. Each pixel on the sensor samples the amount of light reflected by the scene four times at equal intervals per cycle (such as m0, m1, m2, and m3). The phase difference, offset value, and amplitude are obtained by comparing the modulation phase with the transmitted signal phase, and the target depth is calculated based on these three quantities. PW-iToF uses a transmitting module to emit a laser pulse (Tpulse); at the same time, a shutter pulse, which has the same time length as Tpulse, is activated by the transfer gate (TX1). When the reflected laser hits the detector, the charges are collected. After the first shutter pulse ends, the second shutter pulse is activated by the transfer gate (TX2). The charge is integrated in the corresponding storage nodes of the two shutters, and the target depth is calculated based on the accumulation of charge [24].

Figure 2. Principle of time of flight image collection: (a) distance measurement based on continuous-wave modulation; (b) distance measurement based on pulsed modulation.
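
As a numerical illustration of the continuous-wave principle in Figure 2a, the sketch below converts the four per-pixel samples m0–m3 into phase, amplitude, offset, and depth. Sign conventions and calibration differ between sensors, so this is a generic sketch rather than any specific camera's processing chain; the modulation frequency is illustrative.

```python
# Minimal sketch: CW-iToF depth from the four per-pixel samples m0..m3.
# Sign conventions and calibration differ between sensors; values are illustrative.
import numpy as np

c = 2.998e8          # speed of light (m/s)
f_mod = 30e6         # modulation frequency (Hz), illustrative

def cw_itof_depth(m0, m1, m2, m3):
    phase = np.arctan2(m3 - m1, m0 - m2)        # phase difference of the reflected signal
    phase = np.mod(phase, 2 * np.pi)            # keep the phase in [0, 2*pi)
    amplitude = 0.5 * np.sqrt((m0 - m2) ** 2 + (m3 - m1) ** 2)
    offset = 0.25 * (m0 + m1 + m2 + m3)
    depth = c * phase / (4 * np.pi * f_mod)     # depth wraps every c / (2 * f_mod)
    return depth, amplitude, offset

# The unambiguous range (wrapping effect) is c / (2 * f_mod), i.e. about 5 m at 30 MHz.
```
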

2.3.1. Time of Flight Cameras
Time of Flight cameras are part of a broader class of scannerless LiDAR, in which the entire
scene is captured with each laser pulse, as opposed to point-by-point with a laser beam, such as
in scanning LIDAR systems [25]. Typical cameras using ToF technology are SR-4000, CamCube,
Kinect V2, etc., whose structural parameters are shown in Table 4. An important issue for ToF
cameras is the wrapping effect: distances to objects whose phases differ by 360° are
indistinguishable. Using multiple modulation frequencies or lowering the modulation frequency can solve
the issue by increasing the unambiguous metric range [26]. Hu et al. [27] proposed an automatic
system for leaflet non-destructive growth measurement based on a Kinect V2, which uses a turntable
to obtain a multi-view 3D point cloud of the plant under test. Yang Si et al. [28] used a Kinect V2 to
obtain the 3D point cloud depth data of vegetables in seedling trays. Vázquez–Arellano [29] estimated
the stem position of maize plant clouds, calculated the height of individual plants, and generated
a plant height profile of the rows using a Kinect V2 camera in a greenhouse. Bao [30] used Kinect
V2 to obtain 3D point cloud data under field conditions, and a point cloud processing pipeline was
developed to estimate plant height, leaf angle, plant orientation, and stem diameter across multiple
growth stages. A branch 3D skeleton extraction method based on an SR4000 was proposed by Liu [31]
to reconstruct a 3D skeleton model of the branches of apple trees, and an experiment was carried out
in the Fruit Tree Experimental Park. Skeletonization is the process of calculating a thin version of a shape
to simplify and emphasize the geometrical and topological properties of that shape, such as length,
direction, or branching, which are useful for the estimation of phenotypic traits. Hu [32] used the
SR4000 camera to acquire a plant’s 3D spatial data and construct a 3D model of poplar seedling leaves,
then calculated leaf width, leaf length, leaf area, and leaf angle based on the 3D models.
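
Regardless of the specific ToF camera, the depth image it delivers can be converted into a 3D point cloud with the standard pinhole camera model, which is the form of data used in the studies above. A minimal sketch, assuming a depth image in meters and known intrinsics (the example intrinsic values are illustrative only):

```python
# Minimal sketch: back-projecting a depth map to a 3D point cloud (pinhole camera model).
# Assumes depth is an HxW array in meters and fx, fy, cx, cy are the camera intrinsics.
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))   # pixel coordinates
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]                  # drop pixels with no depth

# Example call with illustrative intrinsics for a 512 x 424 ToF sensor:
# cloud = depth_to_point_cloud(depth, fx=365.0, fy=365.0, cx=256.0, cy=212.0)
```
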

Table 4. Depth camera comparison based on time of flight (ToF).

Camera: CAMCUBE 3 | SR-4000 | Kinect V2 | IFM Efector 3D (O3D303)
Manufacturer: PMD Technologies GmbH | Mesa Imaging AG | Microsoft | IFM
Principle: continuous-wave modulation | continuous-wave modulation | continuous-wave modulation | continuous-wave modulation
V (vertical) × H (horizontal) field of view: 40° × 40° | N.A. | 70° × 60° | 60° × 45°
Frame rate and depth resolution: 40 fps, 200 × 200 | 54 fps, 176 × 144 | 30 fps, 512 × 424 | 40 fps, 352 × 264
Measurable range (m): 0.03–7.5 | 0.03–7.5 | 0.5–5 | 0.03–8
Focal length (m): 0.013 | 0.008 | 0.525 | N.A.
Signal wavelength (nm): 870 | 850 | 827–850 | 850
Advantages: high precision and light weight | not affected by ambient light, high precision | rich development resource bundle | strong resistance to light, detection of scenes and objects, 3D images without motion blur
Disadvantages: high cost | not for outdoor light | low measurement accuracy; not suitable for very close object recognition | high cost

A key advantage of time-of-flight cameras is that only a single viewpoint is used to compute
depth. This allows robustness to occlusions and shadows and preservation of sharp depth edges [33].
The main disadvantages of time-of-flight cameras are their low resolution, inability to operate
under strong sunlight, interference from other ToF cameras, and short measurement distances.

2.3.2. LiDAR Scanning Equipment Based on ToF


Light detection and ranging (LiDAR) was developed in the early 1970s to monitor the earth [34].
LiDAR can be divided into aerial and terrestrial LiDAR. As aerial LiDAR laser scanning is mainly
used for 3D data measurement of glaciers, forests, and land, and its effective resolution for plant
phenotypical analysis is low, terrestrial LiDAR scanning is mainly used for 3D plant scanning. Terrestrial
LiDAR (T-LiDAR) scanners can be divided into phase-shift T-LiDAR and pulse-wave T-LiDAR.
Phase-shift T-LiDAR estimates the distance from the phase shift between the continuously emitted and the
received laser signal, making it ideal for high-precision measurement of relatively close scenarios. Pulse-wave
(time-of-flight) T-LiDAR calculates the time between emitting and receiving laser pulses to estimate the
distance, which is suitable for scenarios with large distances. The specifications of some
low-cost T-LiDAR devices for plant canopy measurement are shown in Table 5.

Table 5. Low-cost T-LiDAR (terrestrial LiDAR) scanner specifications.

Performance parameters: LMS 111 [35] | UTM30LX [36,37] | LMS291-S05 [38] | Velodyne HDL64E-S3 [39] | FARO Focus 3D X 330 HDR [40]
Measurement range (m): 0.5–20 | 0.1–30 | 0.2–80 | 0.02–120 | 0.6–330
Field of view (vertical × horizontal): 270° (H) | 270° (H) | 180° (H) | 26.9° × 360° (V × H) | 300° × 360° (V × H)
Light source: infrared (905 nm) | laser semiconductor (905 nm) | infrared (905 nm) | infrared (905 nm) | infrared (1550 nm)
Scanning frequency (Hz): 25 | 40 | 75 | 20 | 97
Angular resolution (°): 0.5 | 0.25 | 0.25 | 0.35 | 0.009
Systematic error: ±30 mm | N.A. | ±35 mm | N.A. | ±2 mm
Statistical error: ±12 mm | N.A. | ±10 mm | N.A. | N.A.
Laser class: Class 1 (IEC 60825-1) | Class 1 | Class 1 (EN/IEC 60825-1) | Class 1 | Class 1 (eye-safe)
Weight (kg): 1.1 | 0.21 | 4.5 | 12.7 | 5.2
LiDAR specifications: 2D | 2D | 2D | 3D | 3D

N.A. indicates that data were not found.

LiDAR can be used for canopy measurement. Garrido [35] used a portable LMS 111 LiDAR to
reconstruct maize 3D structure under greenhouse conditions, supporting the aim of developing a
georeferenced 3D plant reconstruction. Yuan [38] developed a detection system to measure tree
canopy structure with a UTM30LX LiDAR, with which the height and width of an artificial tree could be obtained.
Qiu [39] used a Velodyne HDL64E-S3 LiDAR to obtain depth-band histograms and horizontal
point density, using these data to recognize and compute morphological phenotype parameters (row
spacing and plant height) of maize plants in an experimental field. Jin [40] used a FARO Focus
3D X 330 HDR LiDAR to obtain maize point cloud data, and realized stem-leaf segmentation and phenotypic trait
extraction in an experiment carried out in the Botany Garden.
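
Once the ground has been identified in a terrestrial LiDAR scan, plant or canopy height of the kind reported above can be read off the point cloud directly. A minimal sketch, using percentiles as a simple noise-robust choice (the file name and percentile values are illustrative):

```python
# Minimal sketch: plant height from a T-LiDAR point cloud (z axis pointing up).
# Percentiles are used instead of min/max to suppress residual noise; values are illustrative.
import numpy as np

points = np.loadtxt("canopy_scan.xyz")     # N x 3 array of x, y, z in meters
z = points[:, 2]

ground_level = np.percentile(z, 2)         # robust estimate of the ground height
canopy_top = np.percentile(z, 99)          # robust estimate of the canopy top
plant_height = canopy_top - ground_level
print(f"Estimated plant height: {plant_height:.3f} m")
```
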

2.4. Structured Light Technology and Equipment


Structured light is an active imaging technology. The projector projects a series of light sequences,
or patterns consisting of many stripes at once or of arbitrary fringes onto the object, and the light
sequence is deformed on the object. Then, the camera shoots the object in another direction and extracts
the deformation of its stripe shape and stripe width to obtain depth data. The method is shown in
Figure 3.

Figure 3. The principle of structured light.

A structured light 3D scanner has some advantages. A structured light scanner can produce highly accurate results; the resolution is typically high; the captured images can reliably determine the dimensions of the object; and it is often fast, as 3D imaging can occur practically as fast as an image can be taken. Structured light imaging systems also have a better measurement coverage area than other 3D imaging techniques, as long as the distance is fixed, which is particularly useful for larger parts that need multiple scans, further saving time and creating efficiencies in production [33]. Major drawbacks of the sequential projection techniques include the inability to acquire 3D objects in dynamic motion or in a live subject such as human body parts. Another limitation is that the reflected pattern is sensitive to optical interference from the environment, so the method is mainly suitable for indoor use. The general process for 3D reconstruction based on structured light is as follows: camera and projector calibration, where projector calibration includes intensity calibration, to build the relationship between the actual intensity of the projected pattern and the image pixel value, and geometric calibration, to build the relationship between points in 3D space and the projector [41]; projecting patterns and finding correspondences to estimate the parameter matrix between pixels and points in 3D space; obtaining a 3D point cloud based on the parameter matrix of the structured light camera; and carrying out 3D reconstruction.

Chené et al. [42] used Kinect V1 to measure leaf curvature, morphology, and orientation. Azzari et al. [43] used Kinect V1 to obtain the point cloud data of the plant, and then constructed the canopy structure of the plant to obtain the plant diameter and height. Nguyen et al. [44] used a combination of structured light and multiple cameras to extract plant (cabbage, cucumber, tomato) height, leaf area, and total shaded area. Syed et al. [45] used a RealSense SR300 to obtain the color and depth data of plants (pepper, tomato, cucumber, and lettuce), with the key characteristics of the seedlings obtained through a series of algorithms; the processing speed was also fast. Vit [46] compared the following sensors: Kinect II, Orbbec Astra, Intel RealSense SR300, and Intel D435; experiments showed that the Intel D435 sensor provided the best accuracy for measuring the average diameter of maize stems. Liu [47] proposed a recognition algorithm for citrus fruit based on RealSense. The method effectively used depth point cloud data obtained from a RealSense F200 in a close-shot range of 160 mm and different geometric features of the citrus fruit and leaves to recognize fruits with an intersection curve cut by a depth sphere. Milella [48] used the RealSense R200 depth camera to construct an in-field high-throughput grapevine phenotyping platform that can estimate canopy volume and detect grape bunches under field conditions. Some structured light depth camera specifications are shown in Table 6.

Table 6. Depth camera comparison based on structured light.

Camera: Kinect V1 | RealSense SR300 | Orbbec Astra | Occipital Structure
Measurable range (m): 0.5–4.5 | 0.2–2 | 0.6–8 | 0.4–3.5
V × H field of view: 57° × 43° | 71.5° × 55° | 60° × 49.5° | 58° × 45°
Frame rate and depth resolution: 30 fps, 320 × 240 | 60 fps, 640 × 480 | 30 fps, 640 × 480 | 60 fps, 320 × 240
Price ($): 199 | 150 | 150 | 499
Size (mm): 280 × 64 × 38 | 14 × 20 × 4 | 165 × 30 × 40 | 119.2 × 28 × 29

2.5. Comparison of Main Measurement Technologies


Table 7 summarizes the technological differences among stereo vision, SfM, Time of Flight, LiDAR
scanning, and structured light devices, listing the main advantages and disadvantages of each technology.

Table 7. Summary of the advantages and disadvantages of each technology.

Binocular stereo vision technology [49]
Advantages: (1) obtains depth images quickly, and slight plant movement does not affect precision; (2) low cost; (3) obtains depth and color data at the same time; (4) no further auxiliary equipment required.
Disadvantages: (1) affected by scene lighting; (2) requires high computer performance and complicated algorithms; (3) complex 3D scene reconstruction; (4) not suitable for homogeneous colors; (5) false boundary problem.

Structure-from-motion technology [50]
Advantages: (1) easy operation and low cost; (2) open source and commercial software available for 3D reconstruction; (3) suitable for aerial applications, excellent portability.
Disadvantages: (1) not suitable for real-time applications.

Time-of-flight technology [49,51]
Advantages: (1) no external light required; (2) single viewpoint to compute depth.
Disadvantages: (1) poor depth resolution; (2) does not work in bright light; (3) short distance measurement.

LiDAR scanning technology
Advantages: (1) fast data collection; (2) can work at night; (3) advanced laser scanners can work in severe weather (rain, snow, fog, etc.); (4) works over long distances (more than 100 m).
Disadvantages: (1) poor edge detection (3D point clouds of the edges of plant organs such as leaves, for instance, are blurry); (2) needs warm-up time; (3) needs movement to obtain the depth data of the detected object.

Structured light technology
Advantages: (1) accurate, with high depth resolution; (2) obtains depth images quickly; (3) captures a large area.
Disadvantages: (1) indoor plant imaging only; (2) stationary objects only.

3. Plant Canopy Structure Measurement Based on 3D Reconstruction


The main workflow of plant canopy structure measurement based on 3D reconstruction includes 3D plant
data acquisition, point cloud processing, 3D plant reconstruction, plant segmentation, and plant canopy
structure parameter extraction. The process is shown in Figure 4.

Figure 4. Flow chart of plant canopy structure measurement based on 3D reconstruction.

3.1. 3D Plant Data Acquisition

Plant 3D data are mainly displayed using depth maps [52,53], polygon meshes [54], voxels [55–58], and 3D point clouds [44]. The presentation of these data types is shown in Figure 5. Among them, the depth map is a 2D picture in which each pixel value records the distance from the camera viewpoint to the surface of the obstruction. A polygon mesh, also called an unstructured mesh, is a collection of vertices and polygons representing polyhedron shapes in 3D computer graphics, consisting of a series of convex polygon vertices and convex polygon surfaces [59]. Polygon meshes are intended to represent 3D object models in a way that is easy to render. A voxel [60], which is an abbreviation for volume cell and is similar to a pixel in 2D space, is the smallest unit of digital data in a 3D space partition. Voxelization is a standardized representation method used in the field of 3D imaging. A 3D point cloud is a data set of points in a certain coordinate system that includes 3D coordinates, color, size values, segmentation results, etc.

3D point cloud data can be obtained by visual sensors based on binocular stereo vision technology, multi-view vision technology, SfM technology, ToF technology, and so on. The details of the technical principles and camera specifications were given in Section 2.

Figure 5. Data type: (a) depth maps, (b) polygon meshes, (c) voxels, and (d) 3D point clouds.

3.2. 3D Plant Canopy Point Clouds Preprocessing

Modeling using point cloud data is fast and has finer details than polygon meshes and voxels, which is valuable for agricultural crop monitoring. However, point clouds cannot be used directly for 3D applications; they need to be processed first because of wrongly assigned points and points of no interest, which do not match the actual corresponding object or belong to the background rather than the target object. 3D point cloud preprocessing in general includes background subtraction, outlier removal, and denoising [61]. At present, there are many open source resources available for point cloud processing. Table 8 introduces some functions of open source point cloud processing libraries and open source software.

Table 8. Introduction of open source libraries and software for point cloud processing.

Open source libraries:
- Point Cloud Library: large cross-platform open-source C++ programming library providing a full set of point cloud data processing modules to implement a large number of general point-cloud-related algorithms and efficient data structures. http://pointclouds.org/
- Point Data Abstraction Library: C++ BSD (Berkeley Software Distribution) library for translation and manipulation of point cloud data. https://pdal.io/
- Liblas: libraries for reading and writing plain LiDAR formats. https://liblas.org/
- Entwine: data organization library for large numbers of point clouds, designed to manage hundreds of millions of points and desktop-scale point clouds. https://github.com/connormanning/entwine/
- PotreeConverter: data organization library that generates data for the Potree (a large network-based point cloud renderer) network viewer. https://github.com/potree/PotreeConverter

Open source software:
- Paraview: multi-platform data analysis and visualization application. https://www.paraview.org/
- Meshlab: open source, portable, and extensible system for unstructured 3D triangular mesh processing and editing. http://meshlab.sourceforge.net/
- CloudCompare: open source 3D point cloud and mesh processing software. http://www.danielgm.net/cc/
- OpenFlipper: multi-platform application and programming framework designed to process, model, and render geometric data. http://www.openflipper.org/
- PotreeDesktop: desktop/portable version of the web-based point cloud viewer Potree. https://github.com/potree/PotreeDesktop
- Point Cloud Magic: the first set of free point cloud data processing ("point cloud cube") software developed by the Chinese Academy of Sciences for remote sensing of the earth, providing LiDAR statistical parameters and extraction of vegetation height, biomass, etc., based on statistical regression methods and single tree segmentation. http://lidar.radi.ac.cn/

3.2.1. Background Subtraction

To obtain only the plant canopy, it is necessary to separate the plant point cloud area from the ground, weeds, or other backgrounds after obtaining the plant 3D point cloud data. When using active imaging technology (ToF technology, structured light technology, and so on) without color data to get 3D point clouds, detection of geometric shapes can be applied to remove the background. When using passive imaging technology (binocular stereo vision technology, multi-view vision technology, SfM technology, and so on), color thresholding or clustering with different color data can be applied to remove the background.

Bao [13] used the Random Sample Consensus (RANSAC) algorithm to fit a ground plane and subtracted the background according to a distance threshold between each data point and the fitted plane. Klodt [62] used dense stereo reconstruction to analyze grapevine phenotyping, and backgrounds were segmented with respect to color and depth information. However, low-level geometric shape features cannot handle all types of meshes. Deep convolutional neural networks (CNNs) can solve this problem and provide a highly accurate way to label the background, using many geometric features to train a labeling model [63].

Background subtraction has an important application in robotic weeding. Plant recognition for automated weeding based on 3D sensors includes preprocessing, ground detection, plant extraction refinement, and plant detection and localization. Gai [64] used a Kinect V2 to obtain broccoli point clouds, and RANSAC was used to remove the ground. Afterwards, 2D color information was utilized to compensate for rough ground errors, and clustering was applied to remove the weed point cloud; the result after ground removal with RANSAC is shown in Figure 6. Andújar [65] used a Kinect V2 for volumetric reconstruction of corn, and canonical discriminant analysis (CDA) was used to predict the weed classification of the system using weed height.

Figure 6. Result after ground removal with Random Sample Consensus (RANSAC) [64].
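
RANSAC-based ground removal of the kind used by Bao [13] and Gai [64] is available out of the box in common point cloud libraries such as the Point Cloud Library and Open3D. A minimal Open3D sketch, with an illustrative distance threshold and placeholder file names:

```python
# Minimal sketch: ground-plane removal with RANSAC using Open3D.
# The distance threshold and file names are illustrative placeholders.
import open3d as o3d

pcd = o3d.io.read_point_cloud("plant.ply")

# Fit the dominant plane (assumed to be the ground) with RANSAC.
plane_model, inliers = pcd.segment_plane(distance_threshold=0.02,
                                         ransac_n=3,
                                         num_iterations=1000)

ground = pcd.select_by_index(inliers)                    # points close to the plane
canopy = pcd.select_by_index(inliers, invert=True)       # everything above the ground
o3d.io.write_point_cloud("canopy_no_ground.ply", canopy)
```
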

3.2.2. Outlier Removal and Plant Point Clouds Noise Reduction

An outlier is a data point that differs significantly from other observations. Noisy data are with
a large amount of additional meaningless information
information data, which arise out of various physical measurement processes and limitations of the acquisition technology [66], including data that are corrupted or distorted or that have a low signal-to-noise ratio. In addition, matching ambiguities and image imperfections produced by lens distortion or sensor noise lead to outliers and noise in the point cloud data. Outlier detection approaches are classified into distribution-based [67], depth-based [68], clustering [69], distance-based [70], and density-based approaches [71]. Moving least squares (MLS) generally deals with noise by iteratively projecting points onto weighted least squares fits of their neighborhoods, so that the newly sampled points lie closer to an underlying surface [72].
Wu et al. [73] used a statistical outlier removal filter to denoise the point cloud; for each point, it calculates the mean distance to the K neighboring points found by a K-neighbor search and removes points whose mean distance is excessively large. Yuan et al. [38] used statistical outlier removal to eliminate outlier points around peanut point clouds. Wolff [74] designed a new algorithm to remove noisy points and outliers from each per-view point cloud by checking whether points are consistent with the surface implied by the other input views. Xia [75] combined two characteristic parameters, the average distance of neighboring points and the number of points in the neighborhood, to remove outlier noise, and used a bilateral filtering algorithm to remove small noise in the point cloud of tomato plants. After performing point-wise Gaussian noise reduction, Zhou et al. [76] used a grid optimization method to optimize the point cloud data and the average distance method to remove redundant boundary points, thus obtaining a more realistic blade structure. Hu et al. [27] first used the multi-view interference elimination (MIE) algorithm to reduce layers and then used the moving least squares (MLS) algorithm to reduce the remaining local noise.
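As a concrete illustration of the statistical outlier removal step described above, the following minimal sketch assumes the open-source Open3D library; the file names and parameter values are illustrative choices, not the settings used in the cited studies.

```python
import open3d as o3d

# Load a raw plant point cloud (file name is illustrative).
pcd = o3d.io.read_point_cloud("plant_raw.ply")

# Statistical outlier removal: for each point, compute the mean distance to its
# nb_neighbors nearest neighbors; points whose mean distance deviates from the
# global average by more than std_ratio standard deviations are discarded.
denoised, inlier_idx = pcd.remove_statistical_outlier(nb_neighbors=20, std_ratio=2.0)

print(f"kept {len(inlier_idx)} of {len(pcd.points)} points")
o3d.io.write_point_cloud("plant_denoised.ply", denoised)
```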

3.3. 3D Plant Canopy Reconstruction

3.3.1. Plant Point Clouds Registration
To measure the complete data model of a plant, the points obtained from various perspectives are combined into a unified coordinate system to form a complete point cloud, so the point clouds need to be registered. The purpose of registration is to transform the coordinates of the source point cloud (the initial point cloud) and the target point cloud (the point cloud formed by the motion of the targeted object) and to obtain a rotation translation matrix (RT matrix) that represents the position transformation relationship between the source point cloud and the target point cloud. Point cloud registration can be divided into rough registration and precise registration. Rough registration uses the rotation axis center coordinate and a rotation matrix to make a rigid transformation of the point clouds. Precise registration aligns two sets of 3D measurements through geometric optimization. The iterative closest point (ICP) algorithm [77], the Gaussian mixture models (GMM) algorithm [78], and the thin plate spline robust point matching (TPS-RPM) algorithm [79] are generally used for precise registration. ICP is the most classic and simplest: it iteratively computes the distances between corresponding points of the source and target point clouds, constructs a rotation translation matrix to transform the source point cloud, and calculates the mean squared error after the transformation to determine whether it meets the defined threshold. Jia [80] performed rough registration of plant point clouds from six perspectives based on sample consensus initial alignment (SAC-IA). Precise registration uses a known initial transformation matrix and obtains a more accurate solution through the ICP algorithm. The principle of the ICP algorithm is shown in Figure 7.

Figure 7. Iterative closest point (ICP) algorithm: realize the registration of A and B point clouds.
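A minimal sketch of the rough-plus-precise registration pipeline described above, assuming the Open3D library; the initial matrix, correspondence distance, and file names are illustrative assumptions rather than values from the cited works.

```python
import numpy as np
import open3d as o3d

# Two scans of the same plant taken from different viewpoints (names illustrative).
source = o3d.io.read_point_cloud("view_a.ply")
target = o3d.io.read_point_cloud("view_b.ply")

# Rough registration result (e.g., from SAC-IA or a known turntable angle) as a
# 4x4 rotation-translation matrix; the identity is only a placeholder here.
init_rt = np.eye(4)

# Precise registration: point-to-point ICP repeatedly matches closest points,
# estimates a rotation-translation matrix, and checks the error against the
# convergence criteria.
result = o3d.pipelines.registration.registration_icp(
    source, target,
    max_correspondence_distance=0.02,   # 2 cm matching radius (illustrative)
    init=init_rt,
    estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint(),
    criteria=o3d.pipelines.registration.ICPConvergenceCriteria(max_iteration=50),
)

print("RT matrix:\n", result.transformation)
print("inlier RMSE:", result.inlier_rmse)
source.transform(result.transformation)  # bring the source cloud into the target frame
```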

3.3.2. Plant Point Clouds Surface Reconstruction

According to the different principles of reconstructing surfaces, 3D point cloud surface reconstruction can be divided into surface reconstruction based on Delaunay triangulation [81], region-based growth surface reconstruction, and implicit surface reconstruction [82]. Among them, the Delaunay triangulation and its improved methods [83–85] can satisfy the consistency requirements of the point cloud data topology, but the accuracy of surface reconstruction depends entirely on the density and quality of the point cloud. Region-based growth surface reconstruction can quickly triangulate the original point cloud to reconstruct the surface by projecting a 3D point onto a certain normal plane and then triangulating the projected points in the plane to obtain the connection relationship of the points. After triangulating the plane area, a triangular mesh surface is formed, and then a surface model is obtained according to the connection relationship [83]. Implicit surface reconstruction segments the data into regions for local fitting and further combines these local approximations using blending functions [86]; it has better noise immunity and smoothness, but retaining the sharp features of the surface is difficult. Implicit surface reconstruction includes the radial basis function (RBF) algorithm [87], the point set surface (PSS) algorithm [88], the unified implicit multi-level partition of unity (MPU) algorithm [89], the Poisson algorithm [90], the algebraic point set surface (APSS) algorithm [91], etc.
Jay [92] used Delaunay triangulation to reconstruct the surface of cabbage to calculate the leaf area. Poisson surface reconstruction is often used in plant point cloud surface reconstruction, where the approximate surface is obtained by performing optimal interpolation processing on the point cloud data. Martinez [93,94] used the Poisson algorithm in Meshlab to perform foliar reconstruction of cauliflower leaves. Hu [95] searched for the points of the dense point cloud closest to the vertices of the Poisson reconstruction surface; the obtained distance was compared with a distance threshold to decide whether to remove the corresponding Poisson surface vertices and to smooth the reconstructed cucumber, eggplant, and green pepper surfaces. Poisson surface reconstruction cannot be used for complex plants or plant canopies, so Michael [96] proposed that the boundary of each leaf patch can be refined using the level-set method, and demonstrated the effectiveness of the approach on the surface smoothing of the leaves of wheat and rice after reconstructing 3D point clouds of plants and scenes from multiple color input images. The reconstruction results based on Delaunay triangulation, the implicit surface reconstruction algorithm, and the Poisson algorithm are shown in Figure 8.

Figure 8. (a) Cabbage reconstruction based on Delaunay triangulation [92]; (b) Tree reconstruction based on implicit surface reconstruction algorithm [97]; (c) Sugar beet reconstruction based on Poisson algorithm [94].
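The following sketch shows Poisson surface reconstruction of a plant point cloud with Open3D (our library choice); the octree depth, density quantile, and file names are illustrative, and the density-based trimming is only analogous to, not identical with, the distance-threshold trimming cited above.

```python
import numpy as np
import open3d as o3d

pcd = o3d.io.read_point_cloud("plant_denoised.ply")  # illustrative file name

# Poisson reconstruction needs oriented normals.
pcd.estimate_normals(
    search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=0.01, max_nn=30))

# Solve the Poisson equation on an octree of the given depth; larger depths keep
# more leaf detail but are more sensitive to noise.
mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(pcd, depth=9)

# Remove poorly supported vertices so the surface does not balloon over holes.
densities = np.asarray(densities)
mesh.remove_vertices_by_mask(densities < np.quantile(densities, 0.05))
o3d.io.write_triangle_mesh("plant_mesh.ply", mesh)
```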

3.4. Plant Canopy Segmentation

Plant canopy study is focused on canopy architecture, leaf angle distribution, leaf morphology, leaf number, leaf size, and so on, so plant leaf point cloud segmentation is necessary before morphological analysis. Plant segmentation is the most difficult and important step in plant phenotypic analysis, because plant organs are not similar across different kinds of vegetation, which leads to the use of specific methods for different plant segmentation. Three main varieties of range segmentation algorithms are edge-based segmentation, surface-based segmentation, and scanline-based segmentation [98].
The surface-based segmentation methods use local surface properties as a similarity measure and
merge together the points that are spatially close and have similar surface properties. Surface-based
segmentation is common for plant canopy segmentation and its key is obtaining features for clustering
or classification. Spectral clustering algorithm [99] can solve the segmentation problem of plant stem
and leaf where the centers and spreads are not an adequate description of the cluster, but the number of
clusters must be given as input; Point Feature Histograms (PFH) [100] can better show descriptions of
a point's neighborhood for calculating features. The seed region-growing algorithm [101] is also common for segmentation; it examines the neighboring features of initial seed points and determines whether a point should be added to the region, so the selection of the initial seed points is important for the segmentation result.
Paulus [102] proposed a new approach to the segmentation of plant stem and leaf, which extends the PFH descriptor into surface feature histograms (SFH) in order to make a better distinction, and the new descriptors were used as features and labels for machine learning to realize automatic classification. Hu [27] used pot point data to construct a pot shape feature that defines a plane Sm, and segmented the plant leaf according to whether a point's projection falls on plane Sm. Li [103] selected a suitable seed point feature in the K-nearest neighborhood for clustering to generate coarse planar facets, then carried out facet region growing over the coarse facets according to facet adjacency and coplanarity to accomplish leaf segmentation. Dey [104] used saliency features [105] and color data to obtain a 12-dimensional feature vector for each point, then used an SVM to classify the point clouds of grapes, branches, and leaves according to the obtained features. Gélard [106] decomposed 3D point clouds into super-voxels and used an improved region growing approach to segment merged leaves.
Surface fitting benefits plant canopy segmentation and is used to fit planes or flexible surfaces. The non-uniform rational B-splines (NURBS) [107] algorithm is commonly used to fit plant leaf surfaces. Hu et al. [32] proposed a method based on the angle between two adjacent normal vectors to remove redundant points, and the NURBS method was used to fit the plant leaf. Santos [108] used a single hand-held camera to obtain dense 3D point clouds with MVS technology; the sunflower stem and leaves were segmented by a spectral clustering algorithm, and the leaf surface was estimated using non-uniform rational B-splines (NURBS).
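To make the surface-based idea concrete, the sketch below grows regions over points whose normals are similar, which is only a bare-bones stand-in for the SFH, facet-growing, and super-voxel methods cited above; the libraries (Open3D, SciPy, NumPy), the 15-degree threshold, and the file name are our own assumptions.

```python
import numpy as np
import open3d as o3d
from scipy.spatial import cKDTree

# A de-noised single-plant cloud (file name illustrative).
pcd = o3d.io.read_point_cloud("plant_denoised.ply")
pcd.estimate_normals(search_param=o3d.geometry.KDTreeSearchParamKNN(knn=20))
pts = np.asarray(pcd.points)
nrm = np.asarray(pcd.normals)

tree = cKDTree(pts)
cos_thresh = np.cos(np.deg2rad(15.0))   # neighbors must have similar normals to join
labels = np.full(len(pts), -1, dtype=int)
region = 0
for seed in range(len(pts)):
    if labels[seed] != -1:
        continue
    labels[seed] = region
    stack = [seed]
    while stack:
        i = stack.pop()
        # Grow the region through the k nearest neighbors with a compatible normal.
        for j in tree.query(pts[i], k=15)[1]:
            if labels[j] == -1 and abs(np.dot(nrm[i], nrm[j])) > cos_thresh:
                labels[j] = region
                stack.append(j)
    region += 1

print(f"{region} smooth surface patches (candidate leaf/stem segments)")
```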

3.5. Plant Canopy Structure Parameters Extraction


Plant structure index is used to characterize growth quality, structural parameters, covering area,
and so on. It can be divided into the plant group canopy level [109], individual plant level [110],
and plant organ level [111]. The plant canopy plays important functional roles in cycling materials and
energy through photosynthesis and transpiration, maintaining plant microclimates, and providing
habitats for various taxa [112]. This paper only focuses on the plant group canopy level, which includes
leaf inclination angles, leaf area density, plant area density, etc.

3.5.1. Leaf Inclination Angles


The skeleton, also called the symmetry axis, is a useful structure-based object descriptor.
Extracting object skeletons directly from natural images can deliver important information about the
presence and size of objects. The skeleton segment [113] is often applied to leaf angle measurement.
Skeletonization is used to show the geometrical and topological properties of a shape. Bao [30] performed skeleton segmentation for maize and filtered the skeleton nodes that satisfy a suitable point-to-stem distance; the leaf angle was computed using PCA and approximated by the first eigenvector of the filtered nodes. The skeleton segmentation result is shown in (a) of Figure 9. Because the leaf angle is stable and does not change with zooming in or out, the projected leaf can be used to calculate the leaf angle. Biskup [114] used a projected leaf ROI (region of interest) for plane fitting to build a planar surface model, which is obtained by the RANSAC algorithm and by analyzing the covariance matrix of the outlier-free point cloud; the leaf angle was obtained as the dihedral angle between two planes, as detailed in (b) of Figure 9.
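A minimal sketch of the plane-fitting idea behind leaf inclination measurement: fit a plane to the points of one segmented leaf by PCA (instead of RANSAC) and take the angle between the leaf plane and the horizontal plane. The z-up assumption and the synthetic test leaf are our own.

```python
import numpy as np

def leaf_inclination_deg(leaf_points: np.ndarray) -> float:
    """Fit a plane to an (N, 3) array of leaf points and return the angle
    between that plane and the horizontal plane, in degrees (z is assumed up)."""
    centered = leaf_points - leaf_points.mean(axis=0)
    # The right singular vector with the smallest singular value is the
    # least-variance direction, i.e., the normal of the fitted plane.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    normal = vt[-1]
    cos_tilt = abs(normal[2])  # |cos| of the angle between the normal and the z axis
    return float(np.degrees(np.arccos(np.clip(cos_tilt, 0.0, 1.0))))

# Synthetic check: a flat leaf tilted by 30 degrees about the x axis.
rng = np.random.default_rng(0)
flat = np.c_[rng.uniform(-1, 1, 200), rng.uniform(-1, 1, 200), np.zeros(200)]
t = np.deg2rad(30)
rot_x = np.array([[1, 0, 0],
                  [0, np.cos(t), -np.sin(t)],
                  [0, np.sin(t), np.cos(t)]])
print(round(leaf_inclination_deg(flat @ rot_x.T), 1))  # ~30.0
```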

Figure 9. (a) Skeleton segments that contain both stems and leaves [30]; (b) 3D reconstruction of a soybean leaf consisting of three leaflets. Black lines: normal vectors to fitted plane; red contour: projected region of interest (ROI) used for plane fitting [114].

3.5.2. Leaf Area Density (LAD)

Leaf area density (LAD) is defined as the one-sided leaf area per unit of horizontal layer volume [115]. The leaf area index (LAI), which is defined as the leaf area per unit ground area, is calculated by integrating the LAD over the canopy height. For LAD, the leaf area and plant volume need to be calculated for each layer of voxels, which is obtained by transferring the point clouds into a voxel-based three-dimensional model.
For the direct calculation of LAD, Hosoi [116] proposed the voxel-based canopy profiling (VCP) method to estimate tree LAD; data for each horizontal layer of the canopy were collected from optimally inclined laser beams and were converted into a voxel-based three-dimensional model; then LAD and LAI were computed by counting the beam-contact frequency in each layer using a point-quadrat method.
For the measurement of plant volume, an alpha shape volume estimation was used to calculate plant volume [117]. This algorithm estimates the concave hull around the point cloud and computes the volume from there. Paulus [102] used an alpha shape volume estimation method for volume estimation and an accurate description of the concave wheat ears with segmented point clouds; the detailed presentation is shown in (a) of Figure 10. Hu [27] proposed a method based on tetrahedrons to calculate plant volume; tetrahedrons were constructed from the down-sampled point cloud, the distance between any two points had to be smaller than the maximum edge length of the tetrahedrons, and the plant volume could be calculated from the space of the tetrahedron points. When the plant is reconstructed by a voxel grid or octree, the volume can be estimated by adding up the volumes of all the voxels covering the plant; the detailed presentation is shown in (b) of Figure 10. Chalidabhongse [118] made a 3D mango reconstruction based on the space carving method, and the projection of each voxel in the voxel space onto all views of the images gave an approximation of the object volume.
For leaf fitting using NURBS, the leaf area is calculated as the sum of each partial area according to the fitted surface mesh. Santos [119] and Hu [32] used NURBS to calculate mint and poplar leaf area, and the results were very accurate. It is relatively simple to get the whole plant area without segmentation: Bao [13] converted point clouds into a triangle mesh, reconstructed the surface with PCL, and approximated the plant surface area by the sum of the areas of all triangles in the mesh. When a voxel grid or octree reconstructs the plant, a sequential cluster connecting algorithm and subsequent refinement steps need to be carried out to segment the leaf, and then the voxel grid or octree is converted into a point cloud for piece-wise fitting of leaf planes [120]. Scharr [55] used volume carving to make a 3D maize reconstruction, and the leaf area was calculated by a sequence of segmentation algorithms. In addition, the marching cubes algorithm [121] can also calculate the area of a voxel grid or octree by fitting a mesh surface.

Figure 10. (a) A description of the concave wheat ears with segmental point clouds [102]; (b) The triangulation results of three different sized plants, and the triangle vertexes extracted from the triangular mesh were used as the points to construct tetrahedrons, which can be used to calculate volume [27].
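The voxel-counting volume estimate and the triangle-mesh area estimate mentioned above can be sketched as follows with Open3D (our library choice); the voxel size, Poisson depth, and the assumption that coordinates are in metres are illustrative.

```python
import open3d as o3d

pcd = o3d.io.read_point_cloud("plant_denoised.ply")  # illustrative file name

# Voxel-based volume: cover the plant with cubic voxels and sum their volumes.
voxel_size = 0.005  # 5 mm voxels, assuming coordinates in metres
grid = o3d.geometry.VoxelGrid.create_from_point_cloud(pcd, voxel_size)
volume_m3 = len(grid.get_voxels()) * voxel_size ** 3
print(f"occupied-voxel volume: {volume_m3 * 1e6:.1f} cm^3")

# Mesh-based surface area: triangulate the cloud and sum all triangle areas.
pcd.estimate_normals(search_param=o3d.geometry.KDTreeSearchParamKNN(knn=20))
mesh, _ = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(pcd, depth=8)
print(f"mesh surface area: {mesh.get_surface_area() * 1e4:.1f} cm^2")
```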
3.5.3. Plant Area Density (PAD)

The notion of plant area density (PAD) is easy to understand: it is defined as the canopy area per unit ground area. The device for generating the data points therefore needs to have a broad-scale survey range, and as such, handheld laser scanners and airborne laser scanner (ALS) remote sensing are often used. As a result of the large quantities of data involved in broad-scale plant area measurement, point cloud segmentation and reconstruction are complex and difficult, so PAD is estimated based on the VCP method [116] by converting the point clouds into a voxel-based three-dimensional model. Song [122] used an airborne laser scanner to estimate tree PAD, and PAD was computed with the VCP method. Tables 9 and 10 show examples of the 3D reconstruction of plants and the analysis of the structure index using single and multiple measurement methods.

Table 9. Examples of RMSE for plant canopy 3D structure parameters measurement.

Plants (reference): plant height; leaf area; leaf inclination angle; stem diameter; volume
Black eggplant, palm tree, cotton, sunflower, and tomato [123]: 1.0–1.7 cm; 10–80 cm²; /; /; /
Maize [30]: 0.058 m; /; 3.455°; 5.3 mm; /
Maize seedling [124]: /; 3.23 cm²; 2.68°; /; /
Leafy vegetable [27]: 0.6957 cm; 72.43 cm²; /; /; 2.522 cm³

Note: RMSE, root mean square error.

Table 10. Examples of MAPE and R2 for plant canopy 3D structure parameters measurement.

        LAD                        PAD
Tree    MAPE: 17.2–55.3% [116]     R2: 0.818 [122]

Note: R2, coefficient of determination, the ratio of the sum of the squared regression to the sum of the squared total errors, is an index of the degree of fit of the trend line; MAPE, mean absolute percentage error.
4. Conclusions

4.1. Poor Standardization of Algorithms


There is a lot of variability in the appearance of different kinds of plants, and the analysis methods for reconstruction and segmentation are aimed only at specific plants; moreover, different algorithms may be applied to the same plant in different environments. In the flow of 3D plant data
acquisition, point cloud processing, 3D plant reconstruction, plant segmentation, and plant canopy
structure parameters extraction have multiple processing algorithms and do not have an optimal
criteria to build standards and specifications (such as labeling, naming, formatting, and integrity
constraints). The problems include large differences in format and accuracy, incomplete supporting
data, data redundancy, and low data use. The data from the plant organ layer to the individual plant
layer to the group canopy level are independent from each other in the study of 3D canopy structure,
and the matching characterization with plant physiological data (such as canopy photosynthesis data)
needs to be standardized [125]. For example, Delaunay triangulation, region-based growth surface
reconstruction, and implicit surface reconstruction can be used for plant reconstruction and have
different results.

4.2. 3D Reconstruction Operation Is Slow


The data processing speed can be influenced by the number of input points, which could be a
time-consuming problem for large-sized plants. When analyzing plant phenotypes on a large scale,
3D reconstruction takes longer and is less efficient due to the large number of objects to be analyzed.
The analysis shows that the 3D reconstruction effect of multi-view images is related to the number of
images. The higher the number of images, the better the reconstruction effect, but the corresponding
calculation amount also increases considerably [126], resulting in a time-consuming reconstruction
process. In addition to the speed improvement required by hardware, software algorithms are required
to speed up the calculation.
3D reconstruction speed has a direct relationship with point cloud data size, and rough and
fine reconstruction also take different times. Marton [127] used the triangulation method for fast surface reconstruction of an urban scene, which needed 8.983 s for 65,646 points, while reconstruction of the Radiohead point cloud took 17.857 s for 329,741 points. Although the surface reconstruction itself takes little time, generating a dense and complete 3D point cloud from multiple images will take a lot of time. The CMPMVS
software ran for around 182 min from 66 input images, and Lou [128] used an improved SfM method,
which ran for 15 min to produce the final 3D point cloud for the same images.

4.3. Plant 3D Reconstruction Is Inaccurate


Currently, plant analysis and reconstruction technology extracts the phenotype at a single moment and lacks monitoring of growth dynamics; however, monitoring of growth dynamics requires a
non-invasive time-lapse imaging system that supports accurate reconstruction of plant architecture
and most depth cameras or other devices provide only rough approximations of size, often lacking
high spatial or high temporal resolution [129]. In addition, the occlusion of the plant canopy structure
causes problems such as voids or holes, untextured areas, and blurred images in the final 3D models of
some plants. Therefore, occlusion problems should be avoided as much as possible during the image
collection process. Multi-view stereo reconstruction with multiple devices working together like laser
scanner and ToF camera has high accuracy for sheltered leaves and fruit plant reconstruction, but rapid
multi-view registration is difficult for achieving the high-throughput 3D phenotypic analysis.
Models that have been proposed thus far are still limited in their application because of sensitivity
to outdoor illumination conditions and the inherent difficulty in modeling complex plant shapes using
only radiometric information. Different plant or imaged environments also have a great reconstruction
performance difference with the same material and methods. In the 3D stereo model, the reconstruction
errors of corn, sunflower, black nightshade, and tomato are 5.7, 4.6, 5.2, and 4.7% in LCA (leaf cover
area) [123]. The data accuracy meets the demand for precision agriculture practices, but still needs to
improve the reconstruction accuracy in fine phenotypic analysis and texture research.
The process of plant 3D data capture is easily affected by light intensity, blurred edges, wind
factors, etc., which lead to data loss or low quality, affecting the segmentation of plants and background.
When the plant structure cannot be completely reconstructed, the reconstruction accuracy is reduced.
Although structured light and ToF cameras, under indoor conditions, offer a high measuring speed and strong robustness for a stationary plant, their major weakness is the high noise in the 3D data, which is a challenge for plant segmentation. For individual plant organ segmentation, there are no unified and standard methods; the methods vary largely according to different plant morphologies. Existing methods based on machine learning can achieve good results, but they require manual participation and cannot provide automatic segmentation.

4.4. High Equipment Collection Cost


The current limitation of the broad-scale plant detection is that it relies on a relatively expensive
robotic platform and positioning system. The commercial possibilities of a scout robot are better since
the robot’s task can be executed while navigating when the automatic data processing can be carried
out. As LiDAR [130], light field camera [131], high-precision TOF cameras, and other instruments
are expensive, they are suitable only for laboratory research and large-scale facilities and agricultural
sites. They are currently in the pilot stage, but manual operation is often needed and the promotion is
limited due to funding problems [132]. Although the cost of applying SfM photogrammetry is lower,
generating more detailed models will increase time required and costs. For broad-scale plant detection
of large farms or forests, airborne platforms, including unmanned aerial vehicles (UAVs) or farm helicopters, are necessary, which adds extra cost.

5. Prospects

5.1. Establishing a Standard System of 3D Plant Canopy Structure Data


A future research direction should go into automating the manual estimations by automatically
setting the point density parameter in order to avoid manual trimming. Additionally, more research
needs to be done with the leaf area index (LAI) parameter estimation. High-throughput phenotyping
for large greenhouses and open fields (if the measurements are performed on cloudy or low sunlight
intensity days) is a future application for the analysis system. Phenotypical analysts have introduced
the canopy structure index into various agricultural professional models to match plant physiological
data and improve the international universality of agricultural professional models.
Due to the significant differences in the different plant characteristics on different scales, it is
possible to refine the plant species as a unit on multiple scales such as organ, individual, or population,
and consider the top-level design principles of 3D structure analysis of plant canopies. The top-level
design principles include related terminology categories, detection schemes, technical standards,
technical methods, models for obtaining and using relevant data, and the representation and verification
procedures of the relationship between various data.

5.2. Speeding Up the 3D Plant Canopy Structure Reconstruction


In the different methods used to study plant phenotype, the effects of image preprocessing and
scaling on image registration accuracy can be studied [133] to reduce lighting interference, background
interference, image distortion, and other problems, and then improve the matching degree of plant
reconstruction and enhance the algorithm robustness. If distributed computing can be combined with
computer cluster computing [134], the reconstruction algorithm could be sped up, and performing
distributed optimization on the algorithm could also improve the calculation accuracy and reduce
the calculation time. Clustering algorithms are mainly applied in point cloud processing for background subtraction and outlier removal, as well as in surface feature-based segmentation.
In the construction of the collection device platform, the UAV is a type of remote sensing platform
that is unmanned and reusable. After being equipped with a 3D canopy shape collection device,
the UAV could provide rapid collection, flexible movement, and convenient control. Especially with
the miniaturization of the 3D shape collection device, UAVs can acquire visible or near-infrared images,
3D point cloud images, multispectral images, and remote sensing images with high spatial resolution
at any time. It is possible to construct a 4D space–time scene of farmland based on UAV remote sensing
images through real-time data collection to achieve cross-fusion of time series and spatial images [135].

5.3. Improving the Accuracy of the 3D Structure Index of Canopy Reconstruction


3D plant canopy structure measurement technology can be embedded in phenotypical analysis
tools. Sensor fusion technology can be used to quantify 3D canopy structure and single leaf shape
features by integrating multiple features to improve the accuracy of the structure index. The color,
depth, and infrared data included in the image can be combined to improve the integrity of the plant
phenotypical data and improve the 3D reconstruction effect. Using multiple devices working together to obtain point clouds from multiple views can reduce noise and improve reconstruction accuracy.
Optimizing the segmentation algorithm parameters to support a wider range of plant species
with less parameter tuning is important to improve plant structure index extraction accuracy.
Neural networks can be used for the classification step of segmentation. Deep learning on point clouds is still at the forefront of research. Multi-view convolutional neural networks (CNNs) render 3D point clouds into 2D images and then apply 2D convolutional networks to classify them; this enables shape classification, but it cannot achieve 3D tasks such as point classification and shape completion [129]. Feature-based deep neural networks (DNNs) first convert the 3D data into a vector by extracting traditional shape features and then use a fully connected network to classify the shape, but they are constrained by the representation power of the extracted features [63]. Qi [136] proposed a novel deep neural network called PointNet, which can achieve point classification or semantic segmentation with a 1080X GPU. In conclusion, integrating the local and global features extracted by
deep learning models with the spatial representation of the point clouds will be useful to design a
model for plant canopy segmentation with top performance, but at present its segmentation quality
is low as a result of point clouds being irregular and sparse. The promising solutions are improving multi-scale point cloud resolution, developing architectures of deep learning models like those used for RGB images, and improving the processing of raw point clouds based on zero-shot learning [137].
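As an illustration of the PointNet idea of combining per-point (local) and pooled (global) features, the following PyTorch sketch classifies every point of a cloud into a small number of organ classes; it is a highly simplified stand-in for the architecture of [136], and the layer sizes and class count are arbitrary.

```python
import torch
import torch.nn as nn

class TinyPointNetSeg(nn.Module):
    """Minimal PointNet-style per-point classifier (e.g., leaf vs. stem)."""

    def __init__(self, num_classes: int = 2):
        super().__init__()
        # Shared per-point MLP implemented as 1x1 convolutions over the point axis.
        self.local = nn.Sequential(
            nn.Conv1d(3, 64, 1), nn.ReLU(),
            nn.Conv1d(64, 128, 1), nn.ReLU())
        # Per-point head applied to the concatenated local + global features.
        self.head = nn.Sequential(
            nn.Conv1d(128 + 128, 128, 1), nn.ReLU(),
            nn.Conv1d(128, num_classes, 1))

    def forward(self, xyz: torch.Tensor) -> torch.Tensor:
        # xyz: (batch, 3, num_points)
        local = self.local(xyz)                               # (B, 128, N)
        global_feat = local.max(dim=2, keepdim=True).values   # symmetric max-pool
        global_feat = global_feat.expand(-1, -1, xyz.shape[2])
        return self.head(torch.cat([local, global_feat], dim=1))  # (B, classes, N)

# One forward pass on a random cloud of 1024 points.
logits = TinyPointNetSeg()(torch.randn(1, 3, 1024))
print(logits.shape)  # torch.Size([1, 2, 1024])
```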

Author Contributions: J.W. conceived of the idea and supervised the research and manuscript drafts. Y.Z.
contributed to literature search, study design, data collection, data analysis, and the manuscript drafts. R.G.
improved the writing of this manuscript and contributed to the literature search and study design.
All authors have read and agreed to the published version of the manuscript.
Funding: This work was supported by the Funding for Key R&D Programs in Jiangsu Province(BE2018321);
The Major Natural Science Research Project of Jiangsu Education Department (17KJA416002); Natural Science
Foundation of Huai’an in Jiangsu Province (HABZ201921); Jiangsu Postgraduate Cultivation Innovation
Engineering Graduate Research and Practice Innovation Program (SJCX18_0744, SJCX20_1419), and Jiangsu
Provincial University Superior Discipline Construction Engineering Project.
Conflicts of Interest: The authors declare no conflict of interest.

References
1. Li, D.; Yang, H. State-of-the-art review for internet of things in agriculture. Trans. Chin. Soc. Agric. Mach.
2018, 49, 1–20.
2. Rahman, A.; Mo, C.; Cho, B.-K. 3-D image reconstruction techniques for plant and animal morphological
analysis—A review. J. Biosyst. Eng. 2017, 42, 339–349.
3. Tsai, R. A versatile camera calibration technique for high-accuracy 3D machine vision metrology using
off-the-shelf TV cameras and lenses. IEEE J. Robot. Autom. 1987, 3, 323–344. [CrossRef]
4. Faugeras, O.; Toscani, G. Camera calibration for 3D computer vision. In Proceedings of the International
Workshop on Industrial Application of Machine Vision and Machine Intelligence, Tokyo, Japan,
2–5 February 1987; pp. 240–247.
5. Martins, H.; Birk, J.R.; Kelley, R.B. Camera models based on data from two calibration planes. Comput. Gr.
Image Process. 1981, 17, 173–180. [CrossRef]
6. Pollastri, F. Projection center calibration by motion. Pattern Recognit. Lett. 1993, 14, 975–983. [CrossRef]
7. Caprile, B.; Torre, V. Using vanishing points for camera calibration. Int. J. Comput. Vis. 1990, 4, 127–139.
[CrossRef]
8. Zhang, Z. A flexible new technique for camera calibration. IEEE Trans. Pattern Anal. Mach. Intell. 2000, 22,
1330–1334. [CrossRef]
9. Qi, W.; Li, F.; Zhenzhong, L. Review on camera calibration. In Proceedings of the 2010 Chinese Control and
Decision Conference, Xuzhou, China, 26–28 May 2010; pp. 3354–3358.
10. Andersen, H.J.; Reng, L.; Kirk, K. Geometric plant properties by relaxed stereo vision using simulated
annealing. Comput. Electron. Agric. 2005, 49, 219–232. [CrossRef]
11. Malekabadi, A.J.; Khojastehpour, M.; Emadi, B. Disparity map computation of tree using stereo vision system
and effects of canopy shapes and foliage density. Comput. Electron. Agric. 2019, 156, 627–644. [CrossRef]
12. Li, L.; Yu, X.; Zhang, S.; Zhao, X.; Zhang, L. 3D cost aggregation with multiple minimum spanning trees for
stereo matching. Appl. Opt. 2017, 56, 3411–3420. [CrossRef]
13. Bao, Y.; Tang, L.; Breitzman, M.W.; Salas Fernandez, M.G.; Schnable, P.S. Field-based robotic phenotyping of
sorghum plant architecture using stereo vision. J. Field Robot. 2019, 36, 397–415. [CrossRef]
14. Baweja, H.S.; Parhar, T.; Mirbod, O.; Nuske, S. Stalknet: A deep learning pipeline for high-throughput
measurement of plant stalk count and stalk width. In Field and Service Robotics; Springer: Cham, Switzerland,
2018; pp. 271–284.
15. Dandrifosse, S.; Bouvry, A.; Leemans, V.; Dumont, B.; Mercatoris, B. Imaging wheat canopy through stereo
vision: Overcoming the challenges of the laboratory to field transition for morphological features extraction.
Front. Plant Sci. 2020, 11, 96. [CrossRef] [PubMed]
16. Vázquez-Arellano, M.; Griepentrog, H.W.; Reiser, D.; Paraforos, D.S. 3-D imaging systems for agricultural
applications—A review. Sensors 2016, 16, 618. [CrossRef] [PubMed]
17. Chen, C.; Zheng, Y.F. Passive and active stereo vision for smooth surface detection of deformed plates.
IEEE Trans. Ind. Electron. 1995, 42, 300–306. [CrossRef]
18. Jin, H.; Soatto, S.; Yezzi, A.J. Multi-view stereo reconstruction of dense shape and complex appearance. Int. J.
Comput. Vis. 2005, 63, 175–189. [CrossRef]
19. Smith, M.; Carrivick, J.; Quincey, D. Structure from motion photogrammetry in physical geography. Prog.
Phys. Geogr. 2016, 40, 247–275. [CrossRef]
20. Malambo, L.; Popescu, S.C.; Murray, S.C.; Putman, E.; Pugh, N.A.; Horne, D.W.; Richardson, G.; Sheridan, R.;
Rooney, W.L.; Avant, R.; et al. Multitemporal field-based plant height estimation using 3D point clouds
generated from small unmanned aerial systems high-resolution imagery. Int. J. Appl. Earth Obs. Geoinf.
2018, 64, 31–42. [CrossRef]
21. Tsai, M.; Chiang, K.; Huang, Y.; Lin, Y.; Tsai, J.; Lo, C.; Lin, Y.; Wu, C. The development of a direct
georeferencing ready UAV based photogrammetry platform. In Proceedings of the 2010 Canadian Geomatics
Conference and Symposium of Commission I, Calgary, AB, Canada, 15–18 June 2010.
22. Turner, D.; Lucieer, A.; Wallace, L. Direct georeferencing of ultrahigh-resolution UAV imagery. IEEE Trans.
Geosci. Remote Sens. 2013, 52, 2738–2745. [CrossRef]
23. Rose, J.; Paulus, S.; Kuhlmann, H. Accuracy analysis of a multi-view stereo approach for phenotyping of
tomato plants at the organ level. Sensors 2015, 15, 9651–9665. [CrossRef]
24. Süss, A.; Nitta, C.; Spickermann, A.; Durini, D.; Varga, G.; Jung, M.; Brockherde, W.; Hosticka, B.J.; Vogt, H.;
Schwope, S. Speed considerations for LDPD based time-of-flight CMOS 3D image sensors. In Proceedings of
the 2013 the ESSCIRC (ESSCIRC), Bucharest, Romania, 16–20 September 2013; pp. 299–302.
25. Iddan, G.J.; Yahav, G. Three-dimensional imaging in the studio and elsewhere. In Proceedings of the
Three-Dimensional Image Capture and Applications IV, San Jose, CA, USA, 13 April 2001; pp. 48–55.
26. Foix, S.; Alenya, G.; Torras, C. Lock-in time-of-flight (ToF) cameras: A survey. IEEE Sens. J. 2011, 11,
1917–1926. [CrossRef]
27. Hu, Y.; Wang, L.; Xiang, L.; Wu, Q.; Jiang, H. Automatic non-destructive growth measurement of leafy
vegetables based on kinect. Sensors 2018, 18, 806. [CrossRef] [PubMed]
28. Si, Y.; Wanlin, G.; Jiaqi, M.; Mengliu, W.; Minjuan, W.; Lihua, Z. Method for measurement of vegetable
seedlings height based on RGB-D camera. Trans. Chin. Soc. Agric. Mach. 2019, 50, 128–135.
29. Vázquez-Arellano, M.; Paraforos, D.S.; Reiser, D.; Garrido-Izard, M.; Griepentrog, H.W. Determination of
stem position and height of reconstructed maize plants using a time-of-flight camera. Comput. Electron.
Agric. 2018, 154, 276–288. [CrossRef]
30. Bao, Y.; Tang, L.; Srinivasan, S.; Schnable, P.S. Field-based architectural traits characterisation of maize plant
using time-of-flight 3D imaging. Biosyst. Eng. 2019, 178, 86–101. [CrossRef]
31. Liu, S.; Yao, J.; Li, H.; Qiu, C.; Liu, R. Research on 3D skeletal model extraction algorithm of branch based on
SR4000. In Journal of Physics: Conference Series; IOP Publishing: Bristol, UK, 2019; p. 022059.
32. Hu, C.; Li, P.; Pan, Z. Phenotyping of poplar seedling leaves based on a 3D visualization method. Int. J.
Agric. Biol. Eng. 2018, 11, 145–151. [CrossRef]
33. Kadambi, A.; Bhandari, A.; Raskar, R. 3d depth cameras in vision: Benefits and limitations of the hardware.
In Computer Vision and Machine Learning with RGB-D Sensors; Springer: Cham, Switzerland, 2014; pp. 3–26.
34. Verbyla, D.L. Satellite Remote Sensing of Natural Resources; CRC Press: Cleveland, OH, USA, 1995; Volume 4.
35. Garrido, M.; Paraforos, D.S.; Reiser, D.; Vázquez Arellano, M.; Griepentrog, H.W.; Valero, C. 3D maize plant
reconstruction based on georeferenced overlapping LiDAR point clouds. Remote Sens. 2015, 7, 17077–17096.
[CrossRef]
36. Shen, D.A.Y.; Liu, H.; Hussain, F. A lidar-based tree canopy detection system development. In Proceedings
of the 2018 the 37th Chinese Control Conference (CCC), Wuhan, China, 25–27 July 2018; pp. 10361–10366.
37. Shen, Y.; Addis, D.; Liu, H.; Hussain, F. A LIDAR-Based Tree Canopy Characterization under Simulated
Uneven Road Condition: Advance in Tree Orchard Canopy Profile Measurement. J. Sens. 2017, 2017, 8367979.
[CrossRef]
38. Yuan, H.; Bennett, R.S.; Wang, N.; Chamberlin, K.D. Development of a peanut canopy measurement system
using a ground-based lidar sensor. Front. Plant Sci. 2019, 10, 203. [CrossRef]
39. Qiu, Q.; Sun, N.; Wang, Y.; Fan, Z.; Meng, Z.; Li, B.; Cong, Y. Field-based high-throughput phenotyping for
Maize plant using 3D LiDAR point cloud generated with a “Phenomobile”. Front. Plant Sci. 2019, 10, 554.
[CrossRef]
40. Jin, S.; Su, Y.; Wu, F.; Pang, S.; Gao, S.; Hu, T.; Liu, J.; Guo, Q. Stem–leaf segmentation and phenotypic trait
extraction of individual maize using terrestrial LiDAR data. IEEE Trans. Geosci. Remote Sens. 2018, 57,
1336–1346. [CrossRef]
41. Geng, J. Structured-light 3D surface imaging: A tutorial. Adv. Opt. Photonics 2011, 3, 128–160. [CrossRef]
42. Chéné, Y.; Rousseau, D.; Lucidarme, P.; Bertheloot, J.; Caffier, V.; Morel, P.; Belin, É.; Chapeau-Blondeau, F.
On the use of depth camera for 3D phenotyping of entire plants. Comput. Electron. Agric. 2012, 82, 122–127.
[CrossRef]
43. Azzari, G.; Goulden, M.L.; Rusu, R.B. Rapid characterization of vegetation structure with a Microsoft Kinect
sensor. Sensors 2013, 13, 2384–2398. [CrossRef] [PubMed]
44. Nguyen, T.; Slaughter, D.; Max, N.; Maloof, J.; Sinha, N. Structured light-based 3D reconstruction system for
plants. Sensors 2015, 15, 18587–18612. [CrossRef]
45. Syed, T.N.; Jizhan, L.; Xin, Z.; Shengyi, Z.; Yan, Y.; Mohamed, S.H.A.; Lakhiar, I.A. Seedling-lump integrated
non-destructive monitoring for automatic transplanting with Intel RealSense depth camera. Artif. Intell.
Agric. 2019, 3, 18–32. [CrossRef]
46. Vit, A.; Shani, G. Comparing RGB-D sensors for close range outdoor agricultural phenotyping. Sensors
2018, 18, 4413. [CrossRef]
47. Liu, J.; Yuan, Y.; Zhou, Y.; Zhu, X.; Syed, T.N. Experiments and analysis of close-shot identification of
on-branch citrus fruit with realsense. Sensors 2018, 18, 1510. [CrossRef]
48. Milella, A.; Marani, R.; Petitti, A.; Reina, G. In-field high throughput grapevine phenotyping with a
consumer-grade depth camera. Comput. Electron. Agric. 2019, 156, 293–306. [CrossRef]
49. Perez-Sanz, F.; Navarro, P.J.; Egea-Cortines, M. Plant phenomics: An overview of image acquisition
technologies and image data analysis algorithms. GigaScience 2017, 6, gix092. [CrossRef]
50. Westoby, M.J.; Brasington, J.; Glasser, N.F.; Hambrey, M.J.; Reynolds, J.M. Structure-from-Motion
photogrammetry: A low-cost, effective tool for geoscience applications. Geomorphology 2012, 179, 300–314.
[CrossRef]
51. Klose, R.; Penlington, J.; Ruckelshausen, A. Usability study of 3D time-of-flight cameras for automatic plant
phenotyping. Bornimer Agrartech. Ber. 2009, 69, 12.
52. Ma, X.; Zhu, K.; Guan, H.; Feng, J.; Yu, S.; Liu, G. High-throughput phenotyping analysis of potted soybean
plants using colorized depth images based on a proximal platform. Remote Sens. 2019, 11, 1085. [CrossRef]
53. Sun, G.; Wang, X. Three-dimensional point cloud reconstruction and morphology measurement method for
greenhouse plants based on the kinect sensor self-calibration. Agronomy 2019, 9, 596. [CrossRef]
54. Paproki, A.; Sirault, X.; Berry, S.; Furbank, R.; Fripp, J. A novel mesh processing based technique for 3D plant
analysis. BMC Plant Biol. 2012, 12, 63. [CrossRef] [PubMed]
55. Scharr, H.; Briese, C.; Embgenbroich, P.; Fischbach, A.; Fiorani, F.; Müller-Linow, M. Fast high resolution
volume carving for 3D plant shoot reconstruction. Front. Plant Sci. 2017, 8, 1680. [CrossRef]
56. Kumar, P.; Connor, J.; Mikiavcic, S. High-throughput 3D reconstruction of plant shoots for phenotyping.
In Proceedings of the 2014 13th International Conference on Control Automation Robotics and Vision
(ICARCV), Singapore, 10–12 December 2014; pp. 211–216.
57. Gibbs, J.A.; Pound, M.; French, A.P.; Wells, D.M.; Murchie, E.; Pridmore, T. Plant phenotyping: An active
vision cell for three-dimensional plant shoot reconstruction. Plant Physiol. 2018, 178, 524–534. [CrossRef]
58. Neubert, B.; Franken, T.; Deussen, O. Approximate image-based tree-modeling using particle flows. In
Proceedings of the ACM SIGGRAPH 2007 Papers, San Diego, CA, USA, 5–9 August 2007.
59. Aggarwal, A.; Guibas, L.J.; Saxe, J.; Shor, P.W. A linear-time algorithm for computing the Voronoi diagram of
a convex polygon. Discret. Comput. Geom. 1989, 4, 591–604. [CrossRef]
60. Srihari, S.N. Representation of three-dimensional digital images. ACM Comput. Surv. 1981, 13, 399–424.
[CrossRef]
61. Vandenberghe, B.; Depuydt, S.; Van Messem, A. How to Make Sense of 3D Representations for Plant Phenotyping:
A Compendium of Processing and Analysis Techniques; OSF Preprints: Charlottesville, VA, USA, 2018. [CrossRef]
62. Klodt, M.; Herzog, K.; Töpfer, R.; Cremers, D. Field phenotyping of grapevine growth using dense stereo
reconstruction. BMC Bioinf. 2015, 16, 143. [CrossRef]
63. Guo, K.; Zou, D.; Chen, X. 3D mesh labeling via deep convolutional neural networks. ACM Trans. Graph.
2015, 35, 1–12. [CrossRef]
64. Gai, J.; Tang, L.; Steward, B. Plant recognition through the fusion of 2D and 3D images for robotic weeding.
In 2015 ASABE Annual International Meeting; American Society of Agricultural and Biological Engineers:
St. Joseph, MI, USA, 2015.
65. Andújar, D.; Dorado, J.; Fernández-Quintanilla, C.; Ribeiro, A. An approach to the use of depth cameras for
weed volume estimation. Sensors 2016, 16, 972. [CrossRef] [PubMed]
66. Mitra, N.J.; Nguyen, A. Estimating surface normals in noisy point cloud data. In Proceedings of the Nineteenth
Annual Symposium on Computational Geometry, San Diego, CA, USA, 8–10 June 2003; pp. 322–328.
67. Hawkins, D.M. Identification of Outliers; Springer: Cham, Switzerland, 1980; Volume 11.
68. Johnson, T.; Kwok, I.; Ng, R.T. Fast Computation of 2-Dimensional Depth Contours. In KDD; Citeseer:
Princeton, NJ, USA, 1998; pp. 224–228.
69. Jain, A.K.; Murty, M.N.; Flynn, P.J. Data clustering: A review. ACM Comput. Surv. 1999, 31, 264–323.
[CrossRef]
70. Knorr, E.M.; Ng, R.T.; Tucakov, V. Distance-based outliers: Algorithms and applications. VLDB J. 2000, 8,
237–253. [CrossRef]
71. Breunig, M.M.; Kriegel, H.-P.; Ng, R.T.; Sander, J. LOF: Identifying density-based local outliers. In Proceedings
of the 2000 ACM SIGMOD International Conference on Management of Data, Dallas, TX, USA, 16–18 May 2000;
pp. 93–104.
72. Fleishman, S.; Cohen-Or, D.; Silva, C.T. Robust moving least-squares fitting with sharp features. ACM Trans.
Graph. 2005, 24, 544–552. [CrossRef]
73. Wu, J.; Xue, X.; Zhang, S.; Qin, W.; Chen, C.; Sun, T. Plant 3D reconstruction based on LiDAR and multi-view
sequence images. Int. J. Precis. Agric. Aviat. 2018, 1. [CrossRef]
74. Wolff, K.; Kim, C.; Zimmer, H.; Schroers, C.; Botsch, M.; Sorkine-Hornung, O.; Sorkine-Hornung, A.
Point cloud noise and outlier removal for image-based 3D reconstruction. In Proceedings of the 2016 the
Fourth International Conference on 3D Vision (3DV), Stanford, CA, USA, 25–28 October 2016; pp. 118–127.
75. Xia, C.; Shi, Y.; Yin, W. Obtaining and denoising method of three-dimensional point cloud data of plants
based on TOF depth sensor. Trans. Chin. Soc. Agric. Eng. 2018, 34, 168–174.
76. Zhou, Z.; Chen, B.; Zheng, G.; Wu, B.; Miao, X.; Yang, D.; Xu, C. Measurement of vegetation phenotype based
on ground-based lidar point cloud. J. Ecol. 2020, 39, 308–314.
77. Besl, P.J.; McKay, N.D. Method for registration of 3-D shapes. In Proceedings of the Sensor Fusion IV:
Control Paradigms and Data Structures, Boston, MA, USA, 30 April 1992; pp. 586–606.
78. Jian, B.; Vemuri, B.C. A robust algorithm for point set registration using mixture of Gaussians. In Proceedings
of the Tenth IEEE International Conference on Computer Vision (ICCV’05), Beijing, China, 17–21 October 2005;
Volume 1, pp. 1246–1251.
79. Chui, H.; Rangarajan, A. A new point matching algorithm for non-rigid registration. Comput. Vis. Image
Underst. 2003, 89, 114–141. [CrossRef]
80. Jia, H.; Meng, Y.; Xing, Z.; Zhu, B.; Peng, X.; Ling, J. 3D model reconstruction of plants based on point cloud
stitching. Appl. Sci. Technol. 2019, 46, 19–24.
81. Boissonnat, J.-D. Geometric structures for three-dimensional shape representation. ACM Trans. Graph.
1984, 3, 266–286. [CrossRef]
82. Curless, B.; Levoy, M. A volumetric method for building complex models from range images. In Proceedings
of the 23rd Annual Conference on Computer Graphics and Interactive Techniques, New York, NY, USA,
4–9 August 1996; pp. 303–312.
83. Edelsbrunner, H.; Mücke, E.P. Three-dimensional alpha shapes. ACM Trans. Graph. 1994, 13, 43–72.
[CrossRef]
84. Amenta, N.; Choi, S.; Dey, T.K.; Leekha, N. A simple algorithm for homeomorphic surface reconstruction.
In Proceedings of the Sixteenth Annual Symposium on Computational Geometry, Kowloon, Hong Kong,
China, 12–14 June 2000; pp. 213–222.
85. Forero, M.G.; Gomez, F.A.; Forero, W.J. Reconstruction of surfaces from points-cloud data using Delaunay
triangulation and octrees. In Proceedings of the Vision Geometry XI, Seattle, WA, USA, 24 November 2002;
pp. 184–194.
86. Liang, J.; Park, F.; Zhao, H. Robust and efficient implicit surface reconstruction for point clouds based on
convexified image segmentation. J. Sci. Comput. 2013, 54, 577–602. [CrossRef]
87. Carr, J.C.; Beatson, R.K.; Cherrie, J.B.; Mitchell, T.J.; Fright, W.R.; McCallum, B.C.; Evans, T.R. Reconstruction
and representation of 3D objects with radial basis functions. In Proceedings of the 28th Annual ACM
Conference on Computer Graphics and Interactive Techniques, Los Angeles, CA, USA, 12–17 August 2001;
pp. 67–76.
88. Alexa, M.; Behr, J.; Cohen-Or, D.; Fleishman, S.; Levin, D.; Silva, C.T. Point set surfaces. In Proceedings of the
IEEE Conference on Visualization ’01, San Diego, CA, USA, 21–26 October 2001; pp. 21–28.
89. Ohtake, Y.; Belyaev, A.; Alexa, M.; Turk, G.; Seidel, H.-P. Multi-level partition of unity implicits. In ACM
Siggraph 2005 Courses; Association for Computing Machinery: New York, NY, USA, 2005.
90. Kazhdan, M.; Bolitho, M.; Hoppe, H. Poisson surface reconstruction. In Proceedings of the Fourth Eurographics
Symposium on Geometry Processing, Cagliari, Sardinia, Italy, 26–28 June 2006; Eurographics Association:
Goslar, Germany, 2006; pp. 60–66.
91. Boissonnat, J.-D.; Flototto, J. A local coordinate system on a surface. In Proceedings of the Seventh ACM
Symposium on Solid Modeling and Applications, Saarbrücken, Germany, 17–21 June 2002; pp. 116–126.
92. Jay, S.; Rabatel, G.; Hadoux, X.; Moura, D.; Gorretta, N. In-field crop row phenotyping from 3D modeling
performed using Structure from Motion. Comput. Electron. Agric. 2015, 110, 70–77. [CrossRef]
93. Andújar, D.; Ribeiro, A.; Fernández-Quintanilla, C.; Dorado, J. Using depth cameras to extract structural
parameters to assess the growth state and yield of cauliflower crops. Comput. Electron. Agric. 2016, 122,
67–73. [CrossRef]
94. Martinez-Guanter, J.; Ribeiro, Á.; Peteinatos, G.G.; Pérez-Ruiz, M.; Gerhards, R.; Bengochea-Guevara, J.M.;
Machleb, J.; Andújar, D. Low-cost three-dimensional modeling of crop plants. Sensors 2019, 19, 2883.
[CrossRef] [PubMed]
95. Hu, P.; Guo, Y.; Li, B.; Zhu, J.; Ma, Y. Three-dimensional reconstruction and its precision evaluation of plant
architecture based on multiple view stereo method. Trans. Chin. Soc. Agric. Eng. 2015, 31, 209–214.
96. Pound, M.P.; French, A.P.; Murchie, E.H.; Pridmore, T.P. Automated recovery of three-dimensional models of
plant shoots from multiple color images. Plant Physiol. 2014, 166, 1688–1698. [CrossRef]
97. Kato, A.; Schreuder, G.F.; Calhoun, D.; Schiess, P.; Stuetzle, W. Digital surface model of tree canopy structure
from LIDAR data through implicit surface reconstruction. In Proceedings of the ASPRS 2007 Annual Conference,
Tampa, FL, USA, 7–11 May 2007; Citeseer: Princeton, NJ, USA, 2007.
98. Rabbani, T.; van den Heuvel, F.A.; Vosselman, G. Segmentation of point clouds using smoothness constraint.
Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2006, 36, 248–253.
99. Ng, A.Y.; Jordan, M.I.; Weiss, Y. On spectral clustering: Analysis and an algorithm. In Advances in Neural
Information Processing Systems; MIT Press: Cambridge, MA, USA, 2002; pp. 849–856.
100. Rusu, R.B.; Blodow, N.; Marton, Z.C.; Beetz, M. Aligning point cloud views using persistent feature
histograms. In Proceedings of the 2008 IEEE/RSJ International Conference on Intelligent Robots and Systems,
Nice, France, 22–26 September 2008; pp. 3384–3391.
101. Mehnert, A.; Jackway, P. An improved seeded region growing algorithm. Pattern Recognit. Lett. 1997, 18,
1065–1071. [CrossRef]
102. Paulus, S.; Dupuis, J.; Mahlein, A.-K.; Kuhlmann, H. Surface feature based classification of plant organs from
3D laserscanned point clouds for plant phenotyping. BMC Bioinf. 2013, 14, 238. [CrossRef]
103. Li, D.; Cao, Y.; Tang, X.-S.; Yan, S.; Cai, X. Leaf segmentation on dense plant point clouds with facet region
growing. Sensors 2018, 18, 3625. [CrossRef] [PubMed]
104. Dey, D.; Mummert, L.; Sukthankar, R. Classification of plant structures from uncalibrated image sequences.
In Proceedings of the 2012 IEEE Workshop on the Applications of Computer Vision (WACV), Breckenridge,
CO, USA, 9–11 January 2012; pp. 329–336.
105. Lalonde, J.F.; Vandapel, N.; Huber, D.F.; Hebert, M. Natural terrain classification using three-dimensional
ladar data for ground robot mobility. J. Field Robot. 2006, 23, 839–861. [CrossRef]
106. Gélard, W.; Herbulot, A.; Devy, M.; Debaeke, P.; McCormick, R.F.; Truong, S.K.; Mullet, J. Leaves segmentation
in 3D point cloud. In International Conference on Advanced Concepts for Intelligent Vision Systems; Springer:
Cham, Switzerland, 2017; pp. 664–674.
107. Piegl, L.; Tiller, W. Symbolic operators for NURBS. Comput.-Aided Des. 1997, 29, 361–368. [CrossRef]
108. Santos, T.T.; Koenigkan, L.V.; Barbedo, J.G.A.; Rodrigues, G.C. 3D plant modeling: Localization, mapping
and segmentation for plant phenotyping using a single hand-held camera. In European Conference on Computer
Vision; Springer: Cham, Switzerland, 2014; pp. 247–263.
109. Müller-Linow, M.; Pinto-Espinosa, F.; Scharr, H.; Rascher, U. The leaf angle distribution of natural plant
populations: Assessing the canopy with a novel software tool. Plant Methods 2015, 11, 11. [CrossRef]
110. Zhu, B.; Liu, F.; Zhu, J.; Guo, Y.; Ma, Y. Three-dimensional quantifications of plant growth dynamics in
field-grown plants based on machine vision method. Trans. Chin. Soc. Agric. Mach. 2018, 49, 256–262.
111. Sodhi, P.; Hebert, M.; Hu, H. In-Field Plant Phenotyping Using Model-Free and Model-Based Methods.
Master's Thesis, Carnegie Mellon University, Pittsburgh, PA, USA, 2017.
112. Hosoi, F.; Omasa, K. Estimating vertical plant area density profile and growth parameters of a wheat canopy
at different growth stages using three-dimensional portable lidar imaging. ISPRS J. Photogramm. Remote Sens.
2009, 64, 151–158. [CrossRef]
113. Cornea, N.D.; Silver, D.; Min, P. Curve-skeleton properties, applications, and algorithms. IEEE Trans. Vis.
Comput. Graph. 2007, 13, 530. [CrossRef]
114. Biskup, B.; Scharr, H.; Schurr, U.; Rascher, U. A stereo imaging system for measuring structural parameters
of plant canopies. Plant Cell Environ. 2007, 30, 1299–1308. [CrossRef]
115. Weiss, M.; Baret, F.; Smith, G.; Jonckheere, I.; Coppin, P. Review of methods for in situ leaf area index (LAI)
determination: Part II. Estimation of LAI, errors and sampling. Agric. For. Meteorol. 2004, 121, 37–53.
[CrossRef]
116. Hosoi, F.; Omasa, K. Voxel-based 3-D modeling of individual trees for estimating leaf area density using
high-resolution portable scanning lidar. IEEE Trans. Geosci. Remote Sens. 2006, 44, 3610–3618. [CrossRef]
117. Liang, J.; Edelsbrunner, H.; Fu, P.; Sudhakar, P.V.; Subramaniam, S. Analytical shape computation of
macromolecules: I. Molecular area and volume through alpha shape. Proteins Struct. Funct. Bioinf. 1998, 33,
1–17. [CrossRef]
118. Chalidabhongse, T.; Yimyam, P.; Sirisomboon, P. 2D/3D vision-based mango’s feature extraction and sorting.
In Proceedings of the 2006 the 9th International Conference on Control, Automation, Robotics and Vision,
Singapore, 5–8 December 2006; pp. 1–6.
119. Santos, T.T.; Ueda, J. Automatic 3D plant reconstruction from photographies, segmentation and classification
of leaves and internodes using clustering. In Embrapa Informática Agropecuária-Resumo em anais de congresso
(ALICE); Finnish Society of Forest Science: Vantaa, Finland, 2013.
120. Embgenbroich, P. Bildbasierte Entwicklung Eines Dreidimensionalen Pflanzenmodells am Beispiel von Zea
Mays. Master’s Thesis, Helmholtz Association of German Research Centers, Berlin, Germany, 2015.
121. Lorensen, W.E.; Cline, H.E. Marching cubes: A high resolution 3D surface construction algorithm.
ACM Siggraph Comput. Graph. 1987, 21, 163–169. [CrossRef]
122. Song, Y.; Maki, M.; Imanishi, J.; Morimoto, Y. Voxel-based estimation of plant area density from airborne
laser scanner data. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2011, 38, W12. [CrossRef]
123. Lati, R.N.; Filin, S.; Eizenberg, H. Plant growth parameter estimation from sparse 3D reconstruction based on
highly-textured feature points. Precis. Agric. 2013, 14, 586–605. [CrossRef]
124. Itakura, K.; Hosoi, F. Automatic leaf segmentation for estimating leaf area and leaf inclination angle in 3D
plant images. Sensors 2018, 18, 3576. [CrossRef]
125. Zhao, C. Big data of plant phenomics and its research progress. J. Agric. Big Data 2019, 1, 5–14.
126. Zhou, J.; Guo, X.; Wu, S.; Du, J.; Zhao, C. Research progress on 3D reconstruction of plants based on
multi-view images. China Agric. Sci. Technol. Rev. 2018, 21, 9–18.
127. Marton, Z.C.; Rusu, R.B.; Beetz, M. On fast surface reconstruction methods for large and noisy point
clouds. In Proceedings of the 2009 IEEE International Conference on Robotics and Automation, Kobe, Japan,
12–17 May 2009; pp. 3218–3223.
128. Lou, L.; Liu, Y.; Han, J.; Doonan, J.H. Accurate multi-view stereo 3D reconstruction for cost-effective plant
phenotyping. In International Conference Image Analysis and Recognition; Springer: Cham, Switzerland, 2014;
pp. 349–356.
129. Apelt, F.; Breuer, D.; Nikoloski, Z.; Stitt, M.; Kragler, F. Phytotyping4D: A light-field imaging system for
non-invasive and accurate monitoring of spatio-temporal plant growth. Plant J. 2015, 82, 693–706. [CrossRef]
130. Itakura, K.; Hosoi, F. Estimation of leaf inclination angle in three-dimensional plant images obtained from
lidar. Remote Sens. 2019, 11, 344. [CrossRef]
131. Zhao, J.; Liu, Z.; Guo, B. Three-dimensional digital image correlation method based on a light field camera.
Opt. Lasers Eng. 2019, 116, 19–25. [CrossRef]
132. Hu, Y. Research on Three-Dimensional Reconstruction and Growth Measurement of Leafy Crops Based on Depth
Camera; Zhejiang University: Hangzhou, China, 2018.
133. Henke, M.; Junker, A.; Neumann, K.; Altmann, T.; Gladilin, E. Automated alignment of multi-modal plant
images using integrative phase correlation approach. Front. Plant Sci. 2018, 9, 1519. [CrossRef] [PubMed]
134. Myint, K.N.; Aung, W.T.; Zaw, M.H. Research and analysis of parallel performance with MPI odd-even
sorting algorithm on super cheap computing cluster. In Seventeenth International Conference on Computer
Applications; University of Computer Studies, Yangon under Ministry of Education: Yangon, Myanmar, 2019;
pp. 99–106.
135. Dong, J.; Burnham, J.G.; Boots, B.; Rains, G.; Dellaert, F. 4D crop monitoring: Spatio-temporal reconstruction
for agriculture. In Proceedings of the 2017 IEEE International Conference on Robotics and Automation
(ICRA), Singapore, 29 May–3 June 2017; pp. 3878–3885.
136. Qi, C.R.; Su, H.; Mo, K.; Guibas, L.J. PointNet: Deep learning on point sets for 3D classification and
segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,
Honolulu, HI, USA, 21–26 July 2017; pp. 652–660.
137. Liu, W.; Sun, J.; Li, W.; Hu, T.; Wang, P. Deep learning on point clouds and its application: A survey. Sensors
2019, 19, 4188. [CrossRef] [PubMed]

Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional
affiliations.

© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access
article distributed under the terms and conditions of the Creative Commons Attribution
(CC BY) license (http://creativecommons.org/licenses/by/4.0/).
