UAS-Based Plant Phenotyping for Research and Breeding Applications


Journal profile

Plant Phenomics is an open access journal published in association with Nanjing Agricultural University (NAU). It publishes novel research that advances plant phenotyping and connects phenomics with other research domains.

Editorial board

Plant Phenomics' editorial board is led by Seishi Ninomiya (University of Tokyo), Frédéric Baret (French National Institute of Agricultural Research), and Zong-Ming Cheng (Nanjing Agricultural University/University of Tennessee) and comprises leading experts in the field.

Latest Articles

Research Article

Impact of Varying Light and Dew on Ground Cover Estimates from Active NDVI, RGB, and LiDAR

Canopy ground cover (GC) is an important agronomic measure for evaluating crop establishment and early growth. This study evaluates the reliability of GC estimates, in the presence of varying light and dew on leaves, from three different ground-based sensors: (1) normalized difference vegetation index (NDVI) from the commercially available GreenSeeker®; (2) RGB images from a digital camera, where GC was determined as the portion of pixels from each image meeting a greenness criterion; and (3) LiDAR using two separate approaches: (a) GC from LiDAR red reflectance (whereby red reflectance less than five was classified as vegetation) and (b) GC from LiDAR height (whereby height greater than 10 cm was classified as vegetation). Hourly measurements were made early in the season at two different growth stages (tillering and stem elongation), among wheat genotypes highly diverse for canopy characteristics. The active NDVI showed the least variation through time and was particularly stable, regardless of the available light or the presence of dew. In addition, between-sample-time Pearson correlations for NDVI were consistently high and significant, ranging from 0.89 to 0.98. In comparison, GC from LiDAR and RGB showed greater variation across sampling times, and LiDAR red reflectance was strongly influenced by the presence of dew. Excluding times when the light was exceedingly low, correlations between GC from RGB and NDVI were consistently high (ranging from 0.79 to 0.92). The high reliability of the active NDVI sensor potentially affords a high degree of flexibility for users by enabling sampling across a broad range of acceptable light conditions.
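The exact greenness criterion used in the study is not reproduced in this listing. As a hedged illustration only, the sketch below estimates GC from an RGB image using the common excess-green index with a fixed threshold; the function name, index, and threshold are assumptions, not the article's criterion.

```python
# Hypothetical sketch: fraction of image pixels classified as vegetation.
# The excess-green index (ExG = 2g - r - b on chromaticity coordinates) and
# the 0.10 threshold are stand-ins, not the criterion used in the article.
import numpy as np
from PIL import Image

def ground_cover(path: str, threshold: float = 0.10) -> float:
    rgb = np.asarray(Image.open(path).convert("RGB"), dtype=np.float64)
    total = rgb.sum(axis=2) + 1e-9                     # avoid division by zero
    r, g, b = (rgb[..., i] / total for i in range(3))  # chromaticity coordinates
    exg = 2.0 * g - r - b                              # per-pixel greenness
    return float((exg > threshold).mean())             # GC = vegetated / total pixels

# Usage: print(f"GC = {ground_cover('plot_0830.jpg'):.2%}")
```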

Research Article

Enhanced Field-Based Detection of Potato Blight in Complex Backgrounds Using Deep Learning

Rapid and automated identification of blight disease in potato will help farmers to apply timely remedies to protect their produce. Manual detection of blight disease can be cumbersome and may require trained experts. To overcome these issues, we present an automated system using the Mask Region-based Convolutional Neural Network (Mask R-CNN) architecture, with a residual network (ResNet) as the backbone, for detecting blight disease patches on potato leaves in field conditions. The approach uses transfer learning, which can generate good results even with small datasets. The model was trained on a dataset of 1423 images of potato leaves obtained from fields in different geographical locations and at different times of the day. The images were manually annotated to create over 6200 labeled patches covering diseased and healthy portions of the leaf. The Mask R-CNN model was able to correctly differentiate between the diseased patch on the potato leaf and the similar-looking background soil patches, which can confound the outcome of binary classification. To improve the detection performance, the original RGB dataset was then converted to the HSL, HSV, LAB, XYZ, and YCrCb color spaces. A separate model was created for each color space and tested on 417 field-based test images. This yielded 81.4% mean average precision on the LAB model and 56.9% mean average recall on the HSL model, slightly outperforming the original RGB color space model. Manual analysis of the detection performance indicates an overall precision of 98% on leaf images in a field environment containing complex backgrounds.
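As a hedged sketch (not the authors' code), the snippet below shows the standard torchvision pattern for fine-tuning a COCO-pretrained Mask R-CNN with a ResNet-50 FPN backbone for a two-class task (background plus blight patch) via transfer learning; the function name and class count are illustrative assumptions.

```python
# Hedged sketch (not the authors' code): transfer learning with torchvision's
# Mask R-CNN, ResNet-50 FPN backbone, for background + blight-patch classes.
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor
from torchvision.models.detection.mask_rcnn import MaskRCNNPredictor

def build_blight_model(num_classes: int = 2):
    # Start from COCO-pretrained weights so a small dataset can suffice.
    model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")
    # Replace the box-classification head for our class count.
    in_features = model.roi_heads.box_predictor.cls_score.in_features
    model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)
    # Replace the mask-prediction head as well.
    in_channels = model.roi_heads.mask_predictor.conv5_mask.in_channels
    model.roi_heads.mask_predictor = MaskRCNNPredictor(in_channels, 256, num_classes)
    return model

# Color-space variants (LAB, HSV, etc.) can be produced with OpenCV before
# training, e.g. lab = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2LAB).
```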

Research Article

Automatic Fruit Morphology Phenome and Genetic Analysis: An Application in the Octoploid Strawberry

Automating phenotype measurement will decisively contribute to increasing plant breeding efficiency. Among phenotypes, morphological traits are relevant in many fruit breeding programs, as appearance influences consumer preference. Often, these traits are obtained manually or semiautomatically. Yet, fruit morphology evaluation can be enhanced using fully automated procedures, and digital images provide a cost-effective opportunity for this purpose. Here, we present an automated pipeline for comprehensive phenomic and genetic analysis of morphology traits extracted from internal and external strawberry (Fragaria × ananassa) images. The pipeline segments, classifies, and labels the images and extracts conformation features, including linear (area, perimeter, height, width, circularity, shape descriptor, ratio between height and width) and multivariate (Fourier elliptical components and Generalized Procrustes) statistics. Internal color patterns are obtained using an autoencoder to smooth out the image. In addition, we develop a variational autoencoder to automatically detect the most likely number of underlying shapes. Bayesian modeling is employed to estimate both additive and dominance effects for all traits. As expected, conformational traits are clearly heritable. Interestingly, dominance variance is higher than the additive component for most of the traits. Overall, we show that fruit shape and color can be quickly and automatically evaluated and are moderately heritable. Although we study strawberry images, the algorithm can be applied to other fruits, as shown in the GitHub repository.
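To illustrate the kind of linear shape descriptors listed above, the hedged sketch below computes them from a binary fruit mask with OpenCV. It is a generic helper under assumed inputs, not the published pipeline, and the function name is hypothetical.

```python
# Generic sketch: linear shape descriptors from a binary fruit mask (OpenCV).
import cv2
import numpy as np

def linear_shape_features(mask: np.ndarray) -> dict:
    """mask: uint8 binary image with the fruit as foreground (255)."""
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    contour = max(contours, key=cv2.contourArea)   # largest object = the fruit
    area = cv2.contourArea(contour)
    perimeter = cv2.arcLength(contour, True)
    x, y, w, h = cv2.boundingRect(contour)
    return {
        "area": area,
        "perimeter": perimeter,
        "height": h,
        "width": w,
        "circularity": 4.0 * np.pi * area / (perimeter ** 2 + 1e-9),
        "height_width_ratio": h / w,
    }
```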

Research Article

The Application of UAV-Based Hyperspectral Imaging to Estimate Crop Traits in Maize Inbred Lines

Crop traits such as aboveground biomass (AGB), total leaf area (TLA), leaf chlorophyll content (LCC), and thousand kernel weight (TWK) are important indices in maize breeding. Extracting multiple crop traits at the same time helps improve breeding efficiency. Compared with digital and multispectral images, hyperspectral images derived from an unmanned aerial vehicle (UAV) offer higher spatial and spectral resolution and are therefore expected to enable accurate estimation of similar traits among breeding materials. This study is aimed at exploring the feasibility of estimating AGB, TLA, SPAD value, and TWK using UAV hyperspectral images and at determining the optimal models for facilitating the selection of advanced varieties. The successive projection algorithm (SPA) and competitive adaptive reweighted sampling (CARS) were used to screen sensitive bands for the maize traits. Partial least squares (PLS) and random forest (RF) algorithms were used to estimate the maize traits. The results can be summarized as follows: The sensitive bands for the various traits were mainly concentrated in the near-red and red-edge regions. The sensitive bands screened by CARS were more abundant than those screened by SPA. For AGB, TLA, and SPAD value, the optimal combination was the CARS-PLS method. Regarding TWK, the optimal combination was the CARS-RF method. Compared with the model built by RF, the model built by PLS was more stable. This study provides guidance and practical value for estimating the main traits of maize inbred lines from UAV hyperspectral images at the plot level.
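A hedged sketch of the estimation step under assumed inputs: given a plot-by-band reflectance matrix and a list of sensitive band indices (e.g., produced by CARS or SPA, neither of which is implemented here), fit a PLS model with scikit-learn and report a cross-validated R². The function name and the number of components are illustrative choices, not the study's settings.

```python
# Hedged sketch: trait estimation from selected hyperspectral bands with PLS.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_score

def fit_pls(spectra: np.ndarray, trait: np.ndarray, selected_bands, n_components=5):
    """spectra: (n_plots, n_bands) plot-level reflectance; trait: (n_plots,)."""
    X = spectra[:, selected_bands]            # keep only the sensitive bands
    pls = PLSRegression(n_components=n_components)
    # 5-fold cross-validated R^2 gives a quick check of model stability.
    r2 = cross_val_score(pls, X, trait, cv=5, scoring="r2")
    pls.fit(X, trait)
    return pls, r2.mean()
```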

Database/Software Article

An Integrated Method for Tracking and Monitoring Stomata Dynamics from Microscope Videos

Patchy stomata are a common and characteristic phenomenon in plants. Understanding and studying the regulation mechanism of patchy stomata is of great significance for further supplementing and improving stomatal theory. Currently, the common methods for observing stomatal behavior are based on static images, which makes it difficult to capture the dynamic changes of stomata. The rapid development of portable microscopes and computer vision algorithms brings new opportunities for observing stomatal movement. In this study, a stomatal behavior observation system (SBOS) was proposed for real-time observation and automatic analysis of each individual stoma in wheat leaves using object tracking and semantic segmentation methods. The SBOS includes two modules: the real-time observation module and the automatic analysis module. The real-time observation module can shoot videos of stomatal dynamic changes. In the automatic analysis module, object tracking locates every single stoma accurately to obtain stomatal pictures arranged in time series; semantic segmentation precisely quantifies the stomatal opening area (SOA), with a mean pixel accuracy (MPA) of 0.8305 and a mean intersection over union (MIoU) of 0.5590 on the testing set. Moreover, we designed a graphical user interface (GUI) so that researchers can use this automatic analysis module smoothly. To verify the performance of the SBOS, the dynamic changes of stomata were observed and analyzed under chilling. Finally, we analyzed the correlation between gas exchange and SOA under drought stress; the correlation coefficients between mean SOA and net photosynthetic rate (Pn), intercellular CO2 concentration (Ci), stomatal conductance (Gs), and transpiration rate (Tr) were 0.93, 0.96, 0.96, and 0.97, respectively.
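For reference, the two reported segmentation metrics can be computed from predicted and ground-truth label maps as in the hedged sketch below; the variable and function names are illustrative and are not taken from the SBOS code.

```python
# Sketch: mean pixel accuracy (MPA) and mean IoU (MIoU) for label maps,
# e.g. 0 = background, 1 = open stoma region.
import numpy as np

def mpa_miou(pred: np.ndarray, truth: np.ndarray, num_classes: int = 2):
    """pred, truth: integer label maps of identical shape."""
    accs, ious = [], []
    for c in range(num_classes):
        p, t = pred == c, truth == c
        if t.sum() == 0:                              # class absent from ground truth
            continue
        accs.append((p & t).sum() / t.sum())          # per-class pixel accuracy
        ious.append((p & t).sum() / (p | t).sum())    # per-class IoU
    return float(np.mean(accs)), float(np.mean(ious))
```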

Research Article

Robust Surface Reconstruction of Plant Leaves from 3D Point Clouds

The automation of plant phenotyping using 3D imaging techniques is indispensable. However, conventional methods for reconstructing the leaf surface from 3D point clouds involve a trade-off between the accuracy of leaf surface reconstruction and the method's robustness against noise and missing points. To mitigate this trade-off, we developed a leaf surface reconstruction method that reduces the effects of noise and missing points while maintaining surface reconstruction accuracy by capturing two components of the leaf (the shape and the distortion of that shape) separately, using leaf-specific properties. This separation simplifies leaf surface reconstruction compared with conventional methods while increasing robustness against noise and missing points. To evaluate the proposed method, we reconstructed the leaf surfaces from 3D point clouds of leaves acquired from two crop species (soybean and sugar beet) and compared the results with those of conventional methods. The results showed that the proposed method robustly reconstructed the leaf surfaces of both leaf shapes despite the noise and missing points. To evaluate the stability of the leaf surface reconstructions, we also calculated the surface areas of the target leaves over 14 consecutive days. The results derived from the proposed method showed less variation and fewer outliers compared with the conventional methods.
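As a generic illustration only (not the authors' shape/distortion decomposition), the hedged sketch below fits a low-order polynomial surface z = f(x, y) to a noisy leaf point cloud by least squares; because every remaining point contributes to one global fit, such smooth-surface fitting tolerates moderate noise and gaps. The function name and the quadratic form are assumptions.

```python
# Generic sketch: least-squares fit of z = a + b*x + c*y + d*x^2 + e*x*y + f*y^2
# to a noisy, possibly incomplete leaf point cloud.
import numpy as np

def fit_quadratic_surface(points: np.ndarray) -> np.ndarray:
    """points: (n, 3) array of x, y, z coordinates; returns 6 coefficients."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    A = np.column_stack([np.ones_like(x), x, y, x**2, x * y, y**2])
    coeffs, *_ = np.linalg.lstsq(A, z, rcond=None)
    return coeffs

# Usage: coeffs = fit_quadratic_surface(leaf_cloud)
# The fitted surface can then be evaluated on a grid over the leaf footprint to
# estimate surface area despite noise and gaps in the raw scan.
```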