2021 Journal Impact Factor: 6.961

Plant Phenomics ranks #3 of 90 journals in Agronomy, #18 of 238 in Plant Sciences, and #6 of 34 in Remote Sensing in the Journal Citation Reports (Clarivate, 2022).


Journal profile

The open access journal Plant Phenomics, published in association with NAU, publishes novel research that advances plant phenotyping and connects phenomics with other research domains.

Editorial board

Plant Phenomics' editorial board is led by Seishi Ninomiya (University of Tokyo), Frédéric Baret (French National Research Institute for Agriculture, Food and Environment), and Zong-Ming Cheng (Nanjing Agricultural University/University of Tennessee) and comprises leading experts in the field.

Latest Articles

Research Article

SegVeg: Segmenting RGB Images into Green and Senescent Vegetation by Combining Deep and Shallow Methods

Pixel-level segmentation of high-resolution RGB images into chlorophyll-active and nonactive vegetation classes is often a required first step before estimating key traits of interest. We developed the SegVeg approach for semantic segmentation of RGB images into three classes (background, green vegetation, and senescent vegetation). It proceeds in two steps: a U-net model is first trained on a very large dataset to separate whole vegetation from the background; the green and senescent vegetation pixels are then separated with an SVM, a shallow machine learning technique, trained on a selection of pixels extracted from the images. The performance of the SegVeg approach is then compared to that of a 3-class U-net model trained with weak supervision, using RGB images segmented by SegVeg as ground-truth masks. Results show that the SegVeg approach segments the three classes accurately, although some confusion is observed, mainly between the background and senescent vegetation, particularly over the dark and bright regions of the images. The U-net model achieves similar performance, with slight degradation over the green vegetation: the pixel-based SVM approach delineates the green and senescent patches more precisely than the convolutional U-net. Using the components of several color spaces improves the classification of vegetation pixels into green and senescent. Finally, the models are used to predict the fractions of the three classes over whole images or regularly spaced grid pixels. The green fraction is very well estimated by the SegVeg model, while the senescent and background fractions show slightly degraded performance, with mean 95% confidence error intervals of 2.7% and 2.1% for senescent vegetation and background, respectively, versus 1% for green vegetation. We have made SegVeg publicly available as a ready-to-use script and model, along with the entire annotated grid-pixel dataset. We thus hope to make segmentation accessible to a broad audience by requiring neither manual annotation nor deep learning expertise, or at least by offering a pretrained model for more specific uses.
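The two-stage design lends itself to a compact implementation. The sketch below is a minimal illustration rather than the authors' released code: it assumes a vegetation/background mask already produced by the stage-1 U-net, and shows the stage-2 pixel SVM operating on components of several color spaces (RGB, HSV, and Lab here, an assumption):

```python
# Minimal sketch of the two-stage SegVeg idea. Stage 1 (U-net vegetation
# mask) is assumed given; stage 2 is a pixel-wise SVM on color features.
import numpy as np
from skimage import color
from sklearn.svm import SVC

def pixel_features(rgb):
    """Per-pixel components from several color spaces (RGB, HSV, Lab)."""
    rgb = rgb.astype(np.float64) / 255.0
    hsv = color.rgb2hsv(rgb)
    lab = color.rgb2lab(rgb)
    return np.concatenate([rgb, hsv, lab], axis=-1).reshape(-1, 9)

def train_green_senescent_svm(X_pixels, y_pixels):
    """Stage 2: shallow SVM on annotated vegetation pixels
    (0 = green, 1 = senescent)."""
    clf = SVC(kernel="rbf")
    clf.fit(X_pixels, y_pixels)
    return clf

def segveg_predict(rgb, vegetation_mask, clf):
    """Combine stages into a 3-class map:
    0 background, 1 green, 2 senescent vegetation."""
    feats = pixel_features(rgb)
    flat_mask = vegetation_mask.reshape(-1)
    labels = np.zeros(flat_mask.shape[0], dtype=np.uint8)
    labels[flat_mask] = 1 + clf.predict(feats[flat_mask])
    return labels.reshape(vegetation_mask.shape)
```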

Research Article

Assessing the Storage Root Development of Cassava with a New Analysis Tool

Storage roots of cassava are one of the main sources of starch in many South American, African, and Asian countries. Finding varieties with high yields is crucial for cultivation and breeding, and requires a better understanding of the dynamics of storage root formation, which is usually studied by repeated manual evaluation of root types, diameters, and their distribution in excavated roots. We introduce a newly developed method capable of analyzing the distribution of root diameters automatically, even when root systems display strong variation in root widths and dense clustering. An application study was conducted with cassava roots imaged in a video acquisition box. The root diameter distribution was quantified automatically using an iterative ridge detection approach, which copes with a wide span of root diameters and with clustering. The approach was validated with virtual root models of known geometries and then tested on a time series of excavated root systems. Based on the retrieved diameter classes, we show plausibly that the dynamics of root type formation can be monitored both qualitatively and quantitatively. We conclude that this new method reliably determines important phenotypic traits from storage root crop images. The method is fast, robustly analyzes complex root systems, and is thereby applicable to high-throughput phenotyping and future breeding.
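To make the quantity being estimated concrete: the paper uses an iterative ridge detection approach, but the same output, a histogram of local widths sampled along root centerlines, can be sketched with the simpler medial-axis plus distance-transform baseline shown below. This is for intuition only, not the authors' algorithm:

```python
# Illustrative baseline for a root-diameter distribution from a binary mask.
# The local radius at each skeleton pixel comes from the distance transform.
import numpy as np
from skimage.morphology import medial_axis

def diameter_distribution(root_mask, bin_edges_mm, mm_per_px=1.0):
    """Histogram of local root diameters sampled along the skeleton."""
    skeleton, distance = medial_axis(root_mask, return_distance=True)
    diameters_mm = 2.0 * distance[skeleton] * mm_per_px
    counts, _ = np.histogram(diameters_mm, bins=bin_edges_mm)
    return counts
```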

Research Article

Deep Learning for Strawberry Canopy Delineation and Biomass Prediction from High-Resolution Images

Modeling plant canopy biophysical parameters at the individual-plant level remains a major challenge. This study presents a workflow for automatic strawberry canopy delineation and biomass prediction from high-resolution images using deep neural networks. High-resolution (5 mm) RGB orthoimages, near-infrared (NIR) orthoimages, and digital surface models (DSM) generated by Structure from Motion (SfM) were utilized. Mask R-CNN was applied to orthoimages of two band combinations (RGB and RGB-NIR) to identify and delineate strawberry plant canopies. The average detection precision and recall were 97.28% and 99.71% for RGB images and 99.13% and 99.54% for RGB-NIR images, and the mean intersection over union (mIoU) for instance segmentation was 98.32% and 98.45% for RGB and RGB-NIR images, respectively. Based on the center of the canopy mask, cropped RGB, NIR, DSM, and mask images of individual plants were fed to vanilla deep regression models to estimate canopy leaf area and dry biomass. Two networks (VGG-16 and ResNet-50) were used as the backbone architecture for feature map extraction. The coefficients of determination of the dry biomass models were about 0.76 and 0.79 for the VGG-16 and ResNet-50 networks, respectively; similarly, those for leaf area were 0.82 and 0.84. The RMSE values were approximately 8.31 and 8.73 g for dry biomass with the VGG-16 and ResNet-50 networks, respectively, and leaf area RMSE was 0.05 m² for both networks. This work demonstrates the feasibility of deep learning networks for individual strawberry plant extraction and biomass estimation.
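The regression step described here is architecturally simple, and a minimal sketch in PyTorch is given below. It replaces the classification head of a ResNet-50 with a single scalar output and widens the first convolution to accept stacked per-plant crops; the channel count and input size are assumptions, not the study's exact configuration:

```python
# Minimal "vanilla deep regression" sketch: ResNet-50 backbone, one scalar
# output (e.g. dry biomass in grams or leaf area in square meters).
import torch
import torch.nn as nn
from torchvision.models import resnet50

def biomass_regressor(in_channels=6):
    model = resnet50(weights=None)
    # Accept stacked RGB+NIR+DSM+mask crops instead of 3-channel RGB
    # (channel count is an illustrative assumption).
    model.conv1 = nn.Conv2d(in_channels, 64, kernel_size=7, stride=2,
                            padding=3, bias=False)
    # Single regression output in place of the 1000-way classifier.
    model.fc = nn.Linear(model.fc.in_features, 1)
    return model

model = biomass_regressor()
crops = torch.randn(4, 6, 224, 224)     # batch of per-plant crops
pred = model(crops)                     # (4, 1) trait predictions
loss = nn.MSELoss()(pred, torch.randn(4, 1))
```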

Research Article

BFP Net: Balanced Feature Pyramid Network for Small Apple Detection in Complex Orchard Environment

Despite significant achievements in target fruit detection, small-fruit detection remains a great challenge, especially for immature small green fruits that occupy only a few pixels. The similarity in color between the fruit skin and the background greatly increases the difficulty of locating small target fruits in the natural orchard environment. In this paper, we propose a balanced feature pyramid network (BFP Net) for small apple detection. The network balances the information mapped to small apples from two perspectives: multiple-scale fruits on the different layers of the FPN, and a new extended feature derived from the output of ResNet50 conv1. Specifically, we design a weight-like feature fusion architecture on the lateral connection and top-down structure to alleviate the small-scale information imbalance across FPN layers. Moreover, a new extended layer from ResNet50 conv1 is embedded into the lowest layer of the standard FPN, and a decoupled-aggregated module is devised on this new extended layer to supply spatial location information and ease the problem of locating small apples. In addition, a feature Kullback-Leibler distillation loss is introduced to transfer favorable knowledge from the teacher model to the student model. Experimental results show that our method reaches 47.0%, 42.2%, and 35.6% on the GreenApple, MinneApple, and Pascal VOC benchmarks, respectively. Overall, our method is not only slightly better than some state-of-the-art methods but also generalizes well.
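Of the components above, the feature-level Kullback-Leibler distillation loss is the most self-contained, and a hedged sketch of that kind of loss is shown below: teacher and student feature maps are turned into spatial distributions with a temperature-scaled softmax and compared with KL divergence. The exact normalization used in BFP Net may differ:

```python
# Sketch of a feature-level KL distillation loss between matching
# (B, C, H, W) teacher and student feature maps.
import torch
import torch.nn.functional as F

def feature_kl_loss(student_feat, teacher_feat, tau=1.0):
    b, c, h, w = student_feat.shape
    # Treat each channel's H*W activations as a spatial distribution.
    s = F.log_softmax(student_feat.view(b, c, -1) / tau, dim=-1)
    t = F.softmax(teacher_feat.view(b, c, -1) / tau, dim=-1)
    # batchmean KL, rescaled by tau^2 as is conventional in distillation.
    return F.kl_div(s, t, reduction="batchmean") * tau * tau
```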

Research Article

EasyDAM_V2: Efficient Data Labeling Method for Multishape, Cross-Species Fruit Detection

In modern smart orchards, fruit detection models based on deep learning require expensive dataset labeling to support model construction, resulting in high application costs. Our previous work combined generative adversarial networks (GANs) with pseudolabeling to transfer labels from one species to another and save labeling costs; however, only the color and texture features of images could be migrated, leaving room for improvement in labeling accuracy. This study therefore proposes EasyDAM_V2, an improved data labeling method for multishape, cross-species fruit detection. First, an image translation network named Across-CycleGAN is proposed to generate fruit images from the source domain (fruit images with labels) to the target domain (fruit images without labels), even in the presence of partial shape differences. Then, a pseudolabel adaptive threshold selection strategy is designed to adjust the confidence threshold of the fruit detection model adaptively and dynamically update the pseudolabels, generating labels for images from the unlabeled target domain. We use a labeled orange dataset as the source domain and pitaya and mango datasets as target domains to evaluate the proposed method. The average labeling precision was 82.1% for the pitaya dataset and 85.0% for the mango dataset. The proposed EasyDAM_V2 model can thus transfer labels across fruit species even with partial shape differences, reducing the cost of data labeling.
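The adaptive-threshold idea can be illustrated with a short sketch: instead of a fixed confidence cutoff, the threshold is derived from the current score distribution each round, so it tracks the detector as it improves, and the surviving boxes become the next round's pseudolabels. The selection rule below (a score percentile) is an assumption for illustration, not the paper's exact strategy:

```python
# Illustrative adaptive-threshold pseudolabel selection.
import numpy as np

def select_pseudolabels(detections, keep_fraction=0.6):
    """detections: list of (box, score) pairs from the current detector.
    Returns the kept (box, score) pairs and the threshold that was used."""
    scores = np.array([s for _, s in detections])
    if scores.size == 0:
        return [], 1.0
    # Adaptive threshold: set from the score distribution rather than a
    # fixed constant, so the cutoff moves as detection confidence improves.
    threshold = np.percentile(scores, 100 * (1 - keep_fraction))
    kept = [(b, s) for b, s in detections if s >= threshold]
    return kept, threshold
```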

Research Article

Application of UAV Multisensor Data and Ensemble Approach for High-Throughput Estimation of Maize Phenotyping Traits

High-throughput estimation of phenotypic traits from UAV (unmanned aerial vehicle) images can improve the screening efficiency of maize breeding. Accurately estimating the phenotypic traits of breeding maize at plot scale helps promote gene mining for specific traits and accelerates the breeding of superior varieties. Constructing an efficient and accurate estimation model is key to exploiting UAV-based multisensor data. This study applies ensemble learning to improve the feasibility and accuracy of estimating maize phenotypic traits using UAV-based red-green-blue (RGB) and multispectral sensors. UAV images were obtained at four growth stages. The reflectance of the visible bands, canopy coverage, plant height (PH), and texture information were extracted from the RGB images, and vegetation indices were calculated from the multispectral images. We compared the estimation accuracy of single-type features and combined features for the leaf area index (LAI), fresh weight (FW), and dry weight (DW) of maize. The basic models included ridge regression (RR), support vector machine (SVM), random forest (RF), Gaussian process (GP), and K-nearest neighbors (K-NN); the ensemble learning models included stacking and Bayesian model averaging (BMA). The results showed that ensemble learning improved the accuracy and stability of maize phenotypic trait estimation. Among the features extracted from UAV RGB images, the combination of spectral, structural, and texture features gave the highest accuracy, and the model built from all features of both sensors performed best. The estimation accuracies of the ensemble learning models, stacking and BMA, were higher than those of the basic models, with coefficients of determination (R²) of the optimal validation results of 0.852, 0.888, and 0.929 for LAI, FW, and DW, respectively. Therefore, combining UAV-based multisource data with ensemble learning can accurately estimate the phenotypic traits of breeding maize at plot scale.
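A stacking ensemble over these five base learners is straightforward to assemble with scikit-learn, as the compact sketch below shows. Hyperparameters are defaults and purely illustrative, the meta-learner choice is an assumption, and BMA is not shown:

```python
# Sketch of a stacking ensemble over the five base learners named above.
from sklearn.ensemble import RandomForestRegressor, StackingRegressor
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.linear_model import LinearRegression, Ridge
from sklearn.neighbors import KNeighborsRegressor
from sklearn.svm import SVR

def maize_trait_stack():
    base = [
        ("rr", Ridge()),
        ("svm", SVR()),
        ("rf", RandomForestRegressor(n_estimators=200)),
        ("gp", GaussianProcessRegressor()),
        ("knn", KNeighborsRegressor()),
    ]
    # Out-of-fold predictions of the base models feed a simple meta-learner.
    return StackingRegressor(estimators=base,
                             final_estimator=LinearRegression())

# Usage: fit on plot-level features (spectral, structural, texture) to
# predict a trait such as LAI, then score on a validation split.
# model = maize_trait_stack()
# model.fit(X_train, y_train)
# r2 = model.score(X_val, y_val)
```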