Plant Phenomics Now Indexed in Scopus!
Readers can now find Plant Phenomics' high-quality content indexed in Scopus.
The open access journal Plant Phenomics, published in association with NAU, publishes novel research that advances plant phenotyping and connects phenomics with other research domains.
Plant Phenomics' editorial board is led by Seishi Ninomiya (University of Tokyo), Frédéric Baret (French National Institute of Agricultural Research), and Zong-Ming Cheng (Nanjing Agricultural University/University of Tennessee) and comprises leading experts in the field.
Latest Articles
High-Throughput Corn Image Segmentation and Trait Extraction Using Chlorophyll Fluorescence Images
Plant segmentation and trait extraction for individual organs are two of the key challenges in high-throughput phenotyping (HTP) operations. To address these challenges, the Ag Alumni Seed Phenotyping Facility (AAPF) at Purdue University utilizes chlorophyll fluorescence images (CFIs) to enable consistent and efficient automatic segmentation of plants of different species, ages, or colors. A series of image analysis routines was also developed to facilitate quantitative measurements of key corn plant traits. A proof-of-concept experiment was conducted to demonstrate the utility of the extracted traits in assessing the drought stress response of corn plants. The image analysis routines successfully measured several morphological characteristics of corn plants of different sizes, such as plant height, area, top-node height and diameter, number of leaves, leaf area, and leaf angle relative to the stem. Data from the proof-of-concept experiment showed how corn plants responded when treated with different water regimes or grown in pots of different sizes. High-throughput image segmentation and analysis based on a plant's fluorescence image proved to be efficient and reliable. Traits extracted from the segmented stem and leaves of a corn plant demonstrated the importance and utility of such trait data in evaluating the performance of corn plants under stress. Data collected from corn plants grown in pots of different volumes showed the importance of using pots of a standard size when conducting plant phenotyping experiments and reporting data from a controlled-environment facility.
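The segment-then-measure idea above can be sketched in a few lines. This is a minimal illustration only, assuming a simple intensity threshold on the fluorescence image and millimeter-per-pixel scaling; the thresholds, the toy image, and the function names are placeholders, not the AAPF pipeline's actual parameters.

```python
# Minimal sketch: threshold a chlorophyll fluorescence image to segment the
# plant, then measure simple traits (area, height) from the binary mask.
# All values here are illustrative assumptions.
import numpy as np

def segment_plant(cfi, threshold=0.2):
    """Binary mask: pixels whose fluorescence exceeds the threshold."""
    return cfi > threshold

def extract_traits(mask, mm_per_pixel=1.0):
    """Plant area and height from a binary mask (row 0 = top of image)."""
    area = mask.sum() * mm_per_pixel ** 2
    rows = np.flatnonzero(mask.any(axis=1))
    height = (rows[-1] - rows[0] + 1) * mm_per_pixel if rows.size else 0.0
    return {"area_mm2": float(area), "height_mm": float(height)}

# Toy 6x6 "fluorescence image": plant pixels glow, background stays dark.
cfi = np.zeros((6, 6))
cfi[1:5, 2:4] = 0.8           # a 4x2 block of plant pixels
mask = segment_plant(cfi)
print(extract_traits(mask))   # area 8.0 mm^2, height 4.0 mm at 1 mm/pixel
```

Because fluorescence comes only from chlorophyll, background objects such as soil and pots are naturally dark, which is what makes a simple threshold workable across species, ages, and colors.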
Quantification of Soybean Responses to Flooding Stress Using UAV-Based Imagery and Deep Learning
Soybean is sensitive to flooding stress, which can result in poor seed quality and significant yield reduction. Soybean production under flooding could be sustained by developing flood-tolerant cultivars through breeding programs. Conventionally, soybean tolerance to flooding in field conditions is evaluated by visually rating the shoot injury/damage due to flooding stress, which is labor-intensive and subject to human error. Recent developments in field high-throughput phenotyping technology have shown great potential in measuring crop traits and detecting crop responses to abiotic and biotic stresses. The goal of this study was to investigate the potential of estimating flood-induced soybean injuries using UAV-based image features collected at different flight heights. The flooding injury score (FIS) of 724 soybean breeding plots was rated visually by breeders when soybean showed obvious injury symptoms. Aerial images were taken on the same day using a five-band multispectral camera and an infrared (IR) thermal camera at 20, 50, and 80 m above ground. Five image features, i.e., canopy temperature, normalized difference vegetation index, canopy area, width, and length, were extracted from the images at the three flight heights. A deep learning model was used to classify the soybean breeding plots into five FIS ratings based on the extracted image features. Results show that the image features differed significantly across the three flight heights. The best classification performance, 0.9 for the five-level FIS, was obtained by the model developed using image features at 20 m. The results indicate that the proposed method is very promising for estimating FIS in soybean breeding.
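The classification step above can be illustrated with a tiny feed-forward network that maps the five image features (canopy temperature, NDVI, canopy area, width, length) to one of five FIS ratings. This is a sketch only: the weights are random placeholders and no training is shown, so it reproduces the shape of the task, not the paper's actual model.

```python
# Illustrative sketch: a small feed-forward classifier over five image
# features producing one of five FIS ratings. Weights are untrained random
# placeholders, not the study's model.
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

# 5 input features -> 16 hidden units -> 5 FIS classes
W1, b1 = rng.normal(size=(5, 16)), np.zeros(16)
W2, b2 = rng.normal(size=(16, 5)), np.zeros(5)

def predict_fis(features):
    h = np.maximum(features @ W1 + b1, 0.0)          # ReLU hidden layer
    return softmax(h @ W2 + b2).argmax(axis=-1) + 1  # FIS ratings 1..5

plots = rng.normal(size=(3, 5))  # three plots, five features each
print(predict_fis(plots))        # three ratings, each in 1..5
```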
Deep Multiview Image Fusion for Soybean Yield Estimation in Breeding Applications
Reliable seed yield estimation is an indispensable step in plant breeding programs geared towards cultivar development in major row crops. The objective of this study is to develop a machine learning (ML) approach adept at soybean (Glycine max (L.) Merr.) pod counting to enable genotype seed yield rank prediction from in-field video data collected by a ground robot. To meet this goal, we developed a multiview image-based yield estimation framework utilizing deep learning architectures. Plant images captured from different angles were fused to estimate yield and subsequently to rank soybean genotypes for application in breeding decisions. We used data from a controlled imaging environment in the field, as well as from in-field plant breeding test plots, to demonstrate the efficacy of our framework by comparing its performance with manual pod counting and yield estimation. Our results demonstrate the promise of ML models in making breeding decisions with a significant reduction in time and human effort, opening new avenues for breeding methods to develop cultivars.
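The fuse-then-rank idea can be sketched with a simple late-fusion stand-in: per-view pod-count estimates are averaged per genotype, and genotypes are ranked by the fused estimate. The genotype names and counts below are hypothetical, and the paper's actual fusion happens inside a deep network rather than by averaging.

```python
# Hedged sketch of multiview late fusion for yield ranking. The per-view
# pod-count estimates below are made-up illustrative numbers.
from statistics import mean

# Hypothetical pod-count estimates from three camera angles per genotype.
view_estimates = {
    "G1": [118, 124, 121],
    "G2": [142, 139, 145],
    "G3": [131, 127, 133],
}

fused = {g: mean(v) for g, v in view_estimates.items()}      # fuse views
ranking = sorted(fused, key=fused.get, reverse=True)         # rank genotypes
print(ranking)  # highest estimated yield first: ['G2', 'G3', 'G1']
```

For breeding decisions, the rank order matters more than the absolute count, which is why the framework is evaluated on genotype yield rank rather than raw pod totals.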
UAS-Based Plant Phenotyping for Research and Breeding Applications
The unmanned aircraft system (UAS) is a particularly powerful tool for plant phenotyping, due to the reasonable cost of procurement and deployment, ease and flexibility of control and operation, the ability to reconfigure sensor payloads to diversify sensing, and the ability to fit seamlessly into a larger connected phenotyping network. These advantages have expanded the use of UAS-based plant phenotyping approaches in research and breeding applications. This paper reviews the state of the art in the deployment, collection, curation, storage, and analysis of data from UAS-based phenotyping platforms. We discuss pressing technical challenges, identify future trends in UAS-based phenotyping that the plant research community should be aware of, and pinpoint key plant science and agronomic questions that can be resolved with the next generation of UAS-based imaging modalities and associated data analysis pipelines. This review provides a broad account of the state of the art in UAS-based phenotyping to reduce the barrier to entry for plant science practitioners interested in deploying this imaging modality for phenotyping in plant breeding and research.
Impact of Varying Light and Dew on Ground Cover Estimates from Active NDVI, RGB, and LiDAR
Canopy ground cover (GC) is an important agronomic measure for evaluating crop establishment and early growth. This study evaluates the reliability of GC estimates, in the presence of varying light and dew on leaves, from three different ground-based sensors: (1) normalized difference vegetation index (NDVI) from the commercially available GreenSeeker®; (2) RGB images from a digital camera, where GC was determined as the proportion of pixels in each image meeting a greenness criterion; and (3) LiDAR, using two separate approaches: (a) GC from LiDAR red reflectance (whereby red reflectance less than five was classified as vegetation) and (b) GC from LiDAR height (whereby height greater than 10 cm was classified as vegetation). Hourly measurements were made early in the season at two different growth stages (tillering and stem elongation), among wheat genotypes highly diverse for canopy characteristics. The active NDVI showed the least variation through time and was particularly stable, regardless of the available light or the presence of dew. In addition, between-sample-time Pearson correlations for NDVI were consistently high and significant, ranging from 0.89 to 0.98. In comparison, GC from LiDAR and RGB showed greater variation across sampling times, and LiDAR red reflectance was strongly influenced by the presence of dew. Excluding times when the light was exceedingly low, correlations between GC from RGB and NDVI were consistently high (ranging from 0.79 to 0.92). The high reliability of the active NDVI sensor potentially affords a high degree of flexibility for users by enabling sampling across a broad range of acceptable light conditions.
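Two of the GC estimates above can be sketched directly. The excess-green criterion (2G − R − B > 0) used here for the RGB estimate is a common stand-in assumption, not necessarily the study's exact greenness criterion; the LiDAR height threshold (> 10 cm) follows the text, and the pixel values are made up.

```python
# Minimal sketch of two ground-cover (GC) estimates described above.
# The excess-green criterion is an assumed form; the height threshold
# follows the text. Pixel and height values are illustrative.
import numpy as np

def ndvi(nir, red):
    """NDVI = (NIR - Red) / (NIR + Red)."""
    return (nir - red) / (nir + red)

def gc_rgb(rgb):
    """Fraction of pixels meeting an excess-green criterion (assumed form)."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    return float((2 * g - r - b > 0).mean())

def gc_lidar_height(height_cm):
    """Fraction of LiDAR returns classified as vegetation (height > 10 cm)."""
    return float((height_cm > 10).mean())

rgb = np.array([[[40, 90, 30], [120, 60, 50]],
                [[35, 80, 25], [110, 55, 45]]], dtype=float)
print(gc_rgb(rgb))                                   # 0.5 — two green pixels
print(gc_lidar_height(np.array([2., 15., 12., 4.]))) # 0.5 — two tall returns
```

The active NDVI sensor's stability comes from supplying its own modulated light source, so the ratio is largely insensitive to ambient illumination, unlike the passive RGB and reflectance-based estimates.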
Enhanced Field-Based Detection of Potato Blight in Complex Backgrounds Using Deep Learning
Rapid and automated identification of blight disease in potato will help farmers apply timely remedies to protect their produce. Manual detection of blight disease can be cumbersome and may require trained experts. To overcome these issues, we present an automated system using the Mask Region-based Convolutional Neural Network (Mask R-CNN) architecture, with a residual network as the backbone, for detecting blight disease patches on potato leaves in field conditions. The approach uses transfer learning, which can generate good results even with small datasets. The model was trained on a dataset of 1423 images of potato leaves obtained from fields in different geographical locations and at different times of the day. The images were manually annotated to create over 6200 labeled patches covering diseased and healthy portions of the leaf. The Mask R-CNN model was able to correctly differentiate between diseased patches on the potato leaf and similar-looking background soil patches, which can confound the outcome of binary classification. To improve detection performance, the original RGB dataset was then converted to the HSL, HSV, LAB, XYZ, and YCrCb color spaces. A separate model was created for each color space and tested on 417 field-based test images. This yielded 81.4% mean average precision for the LAB model and 56.9% mean average recall for the HSL model, slightly outperforming the original RGB color space model. Manual analysis of the detection performance indicates an overall precision of 98% on leaf images in a field environment containing complex backgrounds.
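The color-space preprocessing step above can be sketched with Python's standard library, which covers two of the five conversions. The pixel value below is a hypothetical leaf-green sample; the LAB, XYZ, and YCrCb conversions used in the study would typically come from a library such as OpenCV or scikit-image rather than the standard library.

```python
# Sketch of the per-color-space preprocessing idea: convert RGB pixels to
# alternative color spaces before training a separate model on each.
# colorsys (stdlib) handles HSV and HLS (i.e., HSL with channel order H, L, S).
import colorsys

def rgb_pixel_to_spaces(r, g, b):
    """Convert one 0-255 RGB pixel to normalized HSV and HLS tuples."""
    rn, gn, bn = r / 255.0, g / 255.0, b / 255.0
    return {
        "hsv": colorsys.rgb_to_hsv(rn, gn, bn),
        "hls": colorsys.rgb_to_hls(rn, gn, bn),
    }

# A hypothetical leaf-green pixel: hue lands in the green band of the circle.
spaces = rgb_pixel_to_spaces(60, 140, 50)
print(round(spaces["hsv"][0], 2))  # hue ≈ 0.31 (green ~ 1/3 of hue circle)
```

Separating hue from brightness is the motivation for this step: disease patches and soil can have similar RGB values under field lighting, but their chromatic channels diverge more cleanly in hue-based or perceptual spaces.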