Non-Invasive Diagnosis of Nutrient Deficiencies in Winter Wheat and Winter Rye Using UAV-Based RGB Images
Better matching the timing and amount of fertilizer inputs to plant requirements improves nutrient use efficiency and crop
yields and can reduce negative environmental impacts. Deep learning can be a powerful digital tool for on-site, real-time, non-
invasive diagnosis of crop nutrient deficiencies. A drone-based RGB image dataset was generated, together with ground-truth
data, in winter wheat (2020) and winter rye (2021) during tillering and booting in the long-term fertilizer experiment (LTFE)
Dikopshof. In this LTFE, the crops have received the same fertilizer amounts for decades. The selected treatments comprise full
fertilization including manure (NPKCa+m+s), mineral fertilization (NPKCa), mineral fertilization without nitrogen (N) application
(_PKCa), without phosphorus (P) application (N_KCa), without potassium (K) application (NP_Ca), or without liming (Ca) (NPK_), as well as an
unfertilized treatment. The image dataset of more than 3600 UAV-based RGB images was used to train and evaluate
eight CNN-based and transformer-based models as baselines, within each crop-year and across the two crop-year combinations,
to detect the specific fertilizer treatments and hence the specific nutrient deficiencies. The field observations show a strong
biomass decline under N omission and no fertilization, whereas the effects of P, K, and lime omission are smaller.
The mean detection accuracy within one year was 75% (winter wheat) and 81% (winter rye) across models and treatments.
For winter wheat, the detection accuracy was highest for the NPKCa+m+s (100%), unfertilized (96%), and _PKCa (92%)
treatments, and lowest for the N_KCa and NPKCa treatments (about 50%).
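Per-treatment accuracies such as these can be read off a confusion matrix as the diagonal divided by the row sums. A minimal sketch (the matrix values below are illustrative placeholders, not the paper's actual results):

```python
import numpy as np

def per_class_accuracy(cm):
    """Per-treatment detection accuracy from a confusion matrix
    (rows = true treatment, cols = predicted treatment):
    correct predictions on the diagonal divided by each row total."""
    cm = np.asarray(cm, dtype=float)
    return np.diag(cm) / cm.sum(axis=1)

# toy 3-treatment matrix (illustrative numbers only)
cm = [[96,  2,  2],
      [10, 80, 10],
      [ 4,  4, 92]]
print(per_class_accuracy(cm))  # [0.96 0.8  0.92]
```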
Builds on previous research
While the best network could recognize the symptoms of nutrient deficiencies with high accuracy when all development stages had
been observed during training, identifying nutrient deficiencies across crop development stages remained difficult. [41] used a
large number of sequential camera images of oilseed rape canopies at different growth stages and three N, P, and
K levels in a two-year field experiment; the model performance in a cross-year test was 92-93%. [40] used RGB images
captured by a UAV to detect iron deficiency chlorosis in soybean under three conditions designed to test model robustness, namely different
soybean trials, field locations, and vegetative growth stages.
To bridge this gap, domain adaptation methods aim to learn domain-invariant representations that improve the generalization and
performance of the model in the target domain. In this work, we applied several baseline domain adaptation methods
[48,49,50,51,52,53,54,55] for cross-species and cross-year transfer.
To bridge differences between training and test data, domain adaptation approaches can be applied: they use an annotated training
dataset and transfer the learned knowledge to an unseen, unannotated dataset [5]. We thus investigated several methods
for domain adaptation [48], including DAN [49], DANN [50], ADDA [51], CDAN [52], BSP [53], AFN [54], and ERM [55].
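DANN, for instance, places a gradient reversal layer between the feature extractor and the domain classifier: the forward pass is the identity, while the backward pass flips the gradient's sign (scaled by a factor λ), pushing the features toward domain invariance. A framework-free sketch of just that layer (a hypothetical minimal class, not the implementation used in this work):

```python
import numpy as np

class GradReverse:
    """Gradient reversal layer as used in DANN: identity in the
    forward pass, gradient scaled by -lambda in the backward pass,
    so the feature extractor learns to *confuse* the domain classifier."""
    def __init__(self, lam=1.0):
        self.lam = lam

    def forward(self, x):
        return x  # identity: features pass through unchanged

    def backward(self, grad_out):
        return -self.lam * grad_out  # sign-flipped, scaled gradient

grl = GradReverse(lam=0.5)
x = np.array([1.0, -2.0])
print(grl.forward(x))                      # [ 1. -2.]
print(grl.backward(np.array([0.2, 0.4])))  # [-0.1 -0.2]
```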
Differs from previous work
In general, correlations between the model-predicted scores and the visual scores varied widely (R² from 0 to 0.88) depending on the
model and training/test setup. At 92-93%, the cross-year model performance reported in [41] was much higher than our cross-
year, cross-crop model performance.
Highlights
• A drone-based RGB image dataset was generated together with ground-truth data in winter wheat (2020) and winter rye
(2021) during tillering and booting in the long-term fertilizer experiment (LTFE) Dikopshof, where the crops have received
the same fertilizer amounts for decades.
• In most European countries, fertilizer recommendations for the plant macronutrients potassium (K) and phosphorus (P), which
are less mobile in the soil than N, are based on actual measurements of plant-available P and K in the soil, expected yields, and
experimental data from long-term field experiments (LTFEs) [2, 3].
• Three studies used images of real field data [5, 40, 41]. [5] analyzed RGB camera images of sugar beets growing in one year in an
LTFE with various deficiency treatments using five convolutional neural networks (CNNs) to recognize nutrient deficiency symptoms.
• P deficiency (N_KCa) and NPKCa are the most difficult to recognize, which is similar to the results reported in [5]. This is
in line with [45], who reported that the mean yield loss due to P omission (N_KCa) was low for winter wheat and winter rye
(7-8%) in an LTFE.
• We introduced a novel dataset comprising RGB images captured by an unmanned aerial vehicle (UAV) of winter wheat and
winter rye subjected to seven distinct nutrient treatments, offering a unique benchmark under field conditions.
Summary
Applying nutrients to agricultural systems is critical to maximizing crop yields while minimizing the negative environmental
impacts of fertilizers, such as nitrous oxide emissions or groundwater pollution.
A drone-based RGB image dataset was generated together with ground-truth data in winter wheat (2020) and winter rye
(2021) during tillering and booting in the long-term fertilizer experiment (LTFE) Dikopshof.
The image dataset of more than 3600 UAV-based RGB images was used to train and evaluate eight CNN-based
and transformer-based models as baselines, within each crop-year and across the two crop-year combinations, to detect the
specific fertilizer treatments and hence the specific nutrient deficiencies.
A model could be trained on the datasets using deep learning methods such as convolutional neural networks (CNNs)
[20,21,22,23,24,25,26,27], transformers [28,29,30,31,32], and transfer learning [19, 33].
[5] analyzed RGB camera images of sugar beets growing in one year in an LTFE with various deficiency treatments using
five convolutional neural networks (CNNs) to recognize nutrient deficiency symptoms.
[40] used RGB images captured by a UAV to detect iron deficiency chlorosis in soybean under three conditions designed to test model
robustness, namely different soybean trials, field locations, and vegetative growth stages.
To test on-site non-invasive nutrient status diagnosis, we collected RGB images of cereal crops with a UAV in a field with nutrient deficiency treatments.
Study subjects
11 studies
The deficiency classification results ranged from 40% to about 100% across the eleven studies reviewed, covering various crops.
Study analysis
ViT model
ViT is the original Vision Transformer model; it applies a plain transformer architecture to a sequence of embedded image patches (tokens), with models pre-trained before fine-tuning.
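In a ViT, the input image is split into fixed-size patches, and each patch is flattened and linearly projected into a token before entering the transformer. A minimal numpy sketch of that patch-embedding step (the projection weights here are random placeholders, not trained):

```python
import numpy as np

def patch_embed(img, patch=16, dim=64, rng=None):
    """Minimal ViT-style patch embedding: split an HxWxC image into
    non-overlapping patches, flatten each, and project linearly to dim."""
    if rng is None:
        rng = np.random.default_rng(0)
    H, W, C = img.shape
    assert H % patch == 0 and W % patch == 0
    n_h, n_w = H // patch, W // patch
    # (n_h, n_w, patch, patch, C) -> (num_patches, patch*patch*C)
    patches = (img.reshape(n_h, patch, n_w, patch, C)
                  .transpose(0, 2, 1, 3, 4)
                  .reshape(n_h * n_w, patch * patch * C))
    proj = rng.standard_normal((patch * patch * C, dim)) * 0.02
    return patches @ proj  # token sequence fed to the transformer

tokens = patch_embed(np.zeros((64, 64, 3)), patch=16, dim=64)
print(tokens.shape)  # (16, 64): a 64x64 image yields 4x4 = 16 tokens
```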
Xavier algorithm
If the networks were trained from scratch, their parameters were randomly initialized using the Xavier algorithm [57]; otherwise, the models were pre-trained on ImageNet.
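Xavier (Glorot) initialization draws weights from a distribution whose scale depends on the layer's fan-in and fan-out, keeping activation variance roughly constant across layers. A small sketch of the uniform variant:

```python
import numpy as np

def xavier_uniform(fan_in, fan_out, rng=None):
    """Glorot/Xavier uniform initialization:
    W ~ U[-a, a] with a = sqrt(6 / (fan_in + fan_out))."""
    if rng is None:
        rng = np.random.default_rng(0)
    limit = np.sqrt(6.0 / (fan_in + fan_out))
    return rng.uniform(-limit, limit, size=(fan_in, fan_out))

W = xavier_uniform(256, 128)
print(W.shape)  # (256, 128), values bounded by sqrt(6/384)
```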
For the ground-truth data, a one-way analysis of variance (ANOVA) was conducted on the crops' shoot biomass and plant height to compare the means of the different treatments.
Multiple comparisons between treatments were performed using Tukey's test; means sharing the same letter are considered not significantly different (Tukey test, P > 0.05).
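The F statistic behind such a one-way ANOVA compares between-treatment variance to within-treatment variance. A minimal sketch (Tukey's post-hoc test is omitted, and the numbers below are illustrative, not the paper's biomass data):

```python
import numpy as np

def one_way_anova_F(groups):
    """One-way ANOVA F statistic for a list of 1-D samples,
    e.g. shoot biomass measurements per fertilizer treatment."""
    groups = [np.asarray(g, dtype=float) for g in groups]
    n = sum(len(g) for g in groups)   # total observations
    k = len(groups)                   # number of treatments
    grand = np.concatenate(groups).mean()
    # between-treatment sum of squares (k - 1 degrees of freedom)
    ssb = sum(len(g) * (g.mean() - grand) ** 2 for g in groups)
    # within-treatment sum of squares (n - k degrees of freedom)
    ssw = sum(((g - g.mean()) ** 2).sum() for g in groups)
    return (ssb / (k - 1)) / (ssw / (n - k))

# identical treatment means -> F = 0; well-separated means -> large F
print(one_way_anova_F([[1, 2, 3], [1, 2, 3]]))  # 0.0
print(one_way_anova_F([[1, 1, 2], [5, 5, 6]]))  # 72.0
```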
Future work
Another future research direction is the investigation of whether large-scale agricultural datasets provide a better source for pre-
training than commonly used image classification datasets.