Plant Phenomics Now Indexed in Scopus!
Readers can now find Plant Phenomics' high-quality content indexed in Scopus.
The open access journal Plant Phenomics, published in association with NAU, publishes novel research that advances plant phenotyping and connects phenomics with other research domains.
Plant Phenomics' editorial board is led by Seishi Ninomiya (University of Tokyo), Frédéric Baret (French National Institute of Agricultural Research), and Zong-Ming Cheng (Nanjing Agricultural University/University of Tennessee) and comprises leading experts in the field.
Latest Articles
Global Wheat Head Detection 2021: An Improved Dataset for Benchmarking Wheat Head Detection Methods
The Global Wheat Head Detection (GWHD) dataset was created in 2020 and assembled 193,634 labelled wheat heads from 4700 RGB images acquired from various acquisition platforms and 7 countries/institutions. With an associated competition hosted on Kaggle, GWHD_2020 successfully attracted attention from both the computer vision and agricultural science communities. From this first experience, a few avenues for improvement were identified regarding data size, head diversity, and label reliability. To address these issues, the 2020 dataset has been reexamined, relabeled, and complemented with 1722 images from 5 additional countries, contributing 81,553 additional wheat heads. We now release in 2021 a new version of the Global Wheat Head Detection dataset, which is bigger, more diverse, and less noisy than the GWHD_2020 version.
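Benchmarks built on labelled bounding boxes, such as GWHD, typically score detectors by matching predicted and labelled heads via intersection-over-union (IoU). A minimal sketch of that core computation follows; the function name and the (x_min, y_min, x_max, y_max) box convention are illustrative, not taken from the dataset's own tooling:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes (x_min, y_min, x_max, y_max)."""
    ix_min = max(box_a[0], box_b[0])
    iy_min = max(box_a[1], box_b[1])
    ix_max = min(box_a[2], box_b[2])
    iy_max = min(box_a[3], box_b[3])
    inter = max(0.0, ix_max - ix_min) * max(0.0, iy_max - iy_min)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0
```

A predicted head is usually counted as a true positive when its IoU with an unmatched label exceeds a threshold (often 0.5); the exact matching rule used in the GWHD competition is not restated here.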
Exploring Seasonal and Circadian Rhythms in Structural Traits of Field Maize from LiDAR Time Series
Plant growth rhythm in structural traits is important for better understanding plant response to the ever-changing environment. Terrestrial laser scanning (TLS) is a well-suited tool for studying structural rhythm under field conditions. Recent studies have used TLS to describe the structural rhythm of trees, but no consistent patterns have been drawn. Meanwhile, whether TLS can capture structural rhythm in crops is unclear. Here, we aim to explore the seasonal and circadian rhythms in maize structural traits at both the plant and leaf levels from time-series TLS. The seasonal rhythm was studied using TLS data collected at four key growth periods: jointing, bell-mouthed, heading, and maturity. Circadian rhythms were explored using TLS data acquired approximately every 2 hours over a whole day under standard and cold stress conditions. Results showed that TLS can quantify the seasonal and circadian rhythm in structural traits at both plant and leaf levels. (1) Leaf inclination angle decreased significantly between the jointing stage and bell-mouthed stage. Leaf azimuth was stable after the jointing stage. (2) Some individual-level structural rhythms (e.g., azimuth and projected leaf area (PLA)) were consistent with leaf-level structural rhythms. (3) The circadian rhythms of some traits (e.g., PLA) were not consistent under standard and cold stress conditions. (4) Environmental factors showed better correlations with leaf traits under cold stress than under standard conditions. Temperature was the most important factor, significantly correlated with all leaf traits except leaf azimuth. This study highlights the potential of time-series TLS in studying outdoor agricultural chronobiology.
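Traits such as leaf inclination angle can be derived from a TLS point cloud by fitting a plane to the points of a segmented leaf and measuring the tilt of its normal. The study's actual trait-extraction pipeline is not reproduced here; this is a generic sketch of that geometric idea, with the function name and axis convention (z vertical) assumed for illustration:

```python
import numpy as np

def leaf_inclination_deg(points):
    """Fit a plane to segmented leaf points (N x 3) via SVD and return the angle
    in degrees between the plane normal and the vertical (z) axis, i.e. the
    inclination of the leaf plane from horizontal."""
    pts = np.asarray(points, dtype=float)
    centered = pts - pts.mean(axis=0)
    # The right singular vector with the smallest singular value is the plane normal.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    normal = vt[-1]
    cos_t = abs(normal[2]) / np.linalg.norm(normal)
    return float(np.degrees(np.arccos(np.clip(cos_t, 0.0, 1.0))))
```

A perfectly flat, horizontal leaf yields 0 degrees; a vertical leaf yields 90 degrees. Real leaves are curved, so a single plane fit is only a first-order descriptor.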
Field Phenomics: Will It Enable Crop Improvement?
Field phenomics has been identified as a promising enabling technology to assist plant breeders with the development of improved cultivars for farmers. Yet, despite much investment, there are few examples demonstrating the application of phenomics within a plant breeding program. We review recent progress in field phenomics and highlight the importance of targeting breeders’ needs, rather than perceived technology needs, through developing and enhancing partnerships between phenomics researchers and plant breeders.
Estimates of Maize Plant Density from UAV RGB Images Using Faster-RCNN Detection Model: Impact of the Spatial Resolution
Early-stage plant density is an essential trait that determines the fate of a genotype under given environmental conditions and management practices. The use of RGB images taken from UAVs may replace traditional visual counting in fields, with improved throughput, accuracy, and access to plant localization. However, high-resolution images are required to detect the small plants present at the early stages. This study explores the impact of image ground sampling distance (GSD) on the performance of maize plant detection at the three-to-five-leaf stage using the Faster-RCNN object detection algorithm. Data collected at high resolution over six contrasting sites were used for model training. Two additional sites, with images acquired at both high and low resolutions, were used to evaluate model performance. Results show that Faster-RCNN achieved very good plant detection and counting performance when native high-resolution images were used for both training and validation. Similarly, good performance was observed when the model was trained on synthetic low-resolution images, obtained by downsampling the native high-resolution training images, and applied to the synthetic low-resolution validation images. Conversely, poor performance was obtained when the model was trained on one spatial resolution and applied to another. Training on a mix of high- and low-resolution images yielded very good performance on both the native high-resolution and synthetic low-resolution images. However, very low performance was still observed on the native low-resolution images, mainly due to their poor quality. Finally, an advanced super-resolution method based on a GAN (generative adversarial network), which introduces additional textural information derived from the native high-resolution images, was applied to the native low-resolution validation images.
Results show a significant improvement over the bicubic upsampling approach, though performance remains far below that achieved on the native high-resolution images.
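The synthetic low-resolution training images described above are produced by downsampling native high-resolution images and rescaling their box labels accordingly. A minimal sketch of that step follows; the paper uses bicubic downsampling, while this sketch substitutes simple block averaging for self-containment, and the function names are illustrative:

```python
import numpy as np

def downsample(image, factor):
    """Simulate a coarser ground sampling distance (GSD): each factor x factor
    patch of the high-resolution image becomes one pixel (block average, used
    here as a stand-in for bicubic resampling)."""
    h, w = image.shape[:2]
    h2, w2 = h // factor, w // factor
    cropped = image[:h2 * factor, :w2 * factor]
    out = cropped.reshape(h2, factor, w2, factor, -1).mean(axis=(1, 3))
    return out[..., 0] if image.ndim == 2 else out

def scale_boxes(boxes, factor):
    """Rescale (x_min, y_min, x_max, y_max) labels to the downsampled pixel grid."""
    return [tuple(c / factor for c in box) for box in boxes]
```

Training on a mix of native and synthetically degraded images, as the study does, then only requires concatenating the two sets before fitting the detector.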
Classification of Soybean Pubescence from Multispectral Aerial Imagery
The accurate determination of soybean pubescence is essential for plant breeding programs and cultivar registration. Currently, soybean pubescence is classified visually, which is a labor-intensive and time-consuming activity. Additionally, the three phenotype classes (tawny, light tawny, and gray) may be difficult to distinguish visually, especially the light tawny class, which is frequently misclassified as tawny. The objectives of this study were to solve both the throughput and accuracy issues in the plant breeding workflow, develop a set of indices for distinguishing pubescence classes, and test a machine learning (ML) classification approach. A principal component analysis (PCA) on hyperspectral soybean plot data identified clusters related to pubescence classes, while a Jeffries-Matusita distance analysis indicated that all bands were important for pubescence class separability. Aerial images from 2018, 2019, and 2020 were analyzed in this study. A 60-plot test (2019) of genotypes with known pubescence was used as reference data, while whole-field images from 2018, 2019, and 2020 were used to examine the broad applicability of the classification methodology. Two indices, a red/blue ratio and a blue normalized difference vegetation index (blue NDVI), were effective at differentiating tawny and gray pubescence types in high-resolution imagery. An ML approach using a support vector machine (SVM) radial basis function (RBF) classifier was able to differentiate the gray and tawny types (83.1% accuracy on a pixel basis) on images where reference training data were present. The tested indices and ML model did not generalize across years to imagery that did not contain the reference training panel, indicating limitations of using aerial imagery for pubescence classification in some environmental conditions.
High-throughput classification of gray and tawny pubescence types is possible using aerial imagery, but light tawny soybeans remain difficult to classify and may require training data from each field season.
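The two indices named above are simple per-pixel band combinations. A sketch of both follows; the blue NDVI is written in its commonly used form (NIR and blue bands swapped into the NDVI formula), which is an assumption about the exact definition the study used, and the function names are illustrative:

```python
def red_blue_ratio(red, blue, eps=1e-6):
    """Red/blue band ratio; eps guards against division by zero in dark pixels."""
    return red / (blue + eps)

def blue_ndvi(nir, blue, eps=1e-6):
    """Blue NDVI (BNDVI): the NDVI formula with the blue band replacing red,
    i.e. (NIR - Blue) / (NIR + Blue)."""
    return (nir - blue) / (nir + blue + eps)
```

In the study these index values (or the raw bands) feed an SVM with an RBF kernel, e.g. as implemented in scikit-learn; that classifier is not sketched here.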
KAT4IA: K-Means Assisted Training for Image Analysis of Field-Grown Plant Phenotypes
High-throughput phenotyping enables the efficient collection of plant trait data at scale. One example involves using imaging systems over key phases of a crop growing season. Although the resulting images provide rich data for statistical analyses of plant phenotypes, image processing for trait extraction is required as a prerequisite. Current methods for trait extraction are mainly based on supervised learning with human-labeled data or semisupervised learning with a mixture of human-labeled and unlabeled data. Unfortunately, preparing a sufficiently large training data set is both time- and labor-intensive. We describe a self-supervised pipeline (KAT4IA) that uses k-means clustering on greenhouse images to construct training data for extracting and analyzing plant traits from an image-based field phenotyping system. The KAT4IA pipeline includes these main steps: self-supervised training set construction, plant segmentation from images of field-grown plants, automatic separation of target plants, calculation of plant traits, and functional curve fitting of the extracted traits. To deal with the challenge of separating target plants from noisy backgrounds in field images, we describe a novel approach using row-cuts and column-cuts on images segmented by transform-domain neural network learning, which utilizes plant pixels identified from greenhouse images to train a segmentation model for field images. This approach is efficient and does not require human intervention. Our results show that KAT4IA is able to accurately extract plant pixels and estimate plant heights.
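The core self-supervised idea, clustering pixel colors so the greener cluster can be labeled "plant" without human annotation, can be sketched in a few lines. This is a generic illustration, not the KAT4IA implementation: the tiny k-means loop, the excess-green proxy used to pick the plant cluster, and all function names are assumptions:

```python
import numpy as np

def kmeans(points, k=2, iters=20, seed=0):
    """Minimal k-means on pixel feature vectors (e.g. RGB values)."""
    pts = np.asarray(points, dtype=float)
    rng = np.random.default_rng(seed)
    centers = pts[rng.choice(len(pts), size=k, replace=False)].copy()
    for _ in range(iters):
        # Assign each pixel to its nearest center, then recompute the centers.
        labels = np.argmin(((pts[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = pts[labels == j].mean(axis=0)
    return labels, centers

def plant_mask(pixels_rgb):
    """Cluster pixels into two groups and call the greener cluster 'plant',
    using G - 0.5*(R + B) (an excess-green proxy) to rank the cluster centers."""
    labels, centers = kmeans(np.asarray(pixels_rgb, dtype=float), k=2)
    exg = centers[:, 1] - 0.5 * (centers[:, 0] + centers[:, 2])
    return labels == int(np.argmax(exg))
```

Labels produced this way on greenhouse images can then serve as training data for a field-image segmentation model, which is the role k-means plays in the KAT4IA pipeline.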