Plant Phenomics Indexed in SCIE!

The journal has been indexed in Web of Science - Science Citation Index Expanded and is expected to receive its first Journal Impact Factor in 2022.

Journal profile

The open access journal Plant Phenomics, published in association with NAU, publishes novel research that advances plant phenotyping and connects phenomics with other research domains.

Editorial board

Plant Phenomics' editorial board is led by Seishi Ninomiya (University of Tokyo), Frédéric Baret (French National Institute of Agricultural Research), and Zong-Ming Cheng (Nanjing Agricultural University/University of Tennessee) and comprises leading experts in the field.

Latest Articles

Research Article

GANana: Unsupervised Domain Adaptation for Volumetric Regression of Fruit

3D reconstruction of fruit is important as a key component of fruit grading and an important part of many size estimation pipelines. Like many computer vision challenges, the 3D reconstruction task suffers from a lack of readily available training data in most domains, with methods typically depending on large datasets of high-quality image-model pairs. In this paper, we propose an unsupervised domain-adaptation approach to 3D reconstruction in which labelled images exist only in our source synthetic domain and training is supplemented with unlabelled datasets from the target real domain. We approach the problem of 3D reconstruction using volumetric regression and produce a training set of 25,000 image-volume pairs using hand-crafted 3D models of bananas rendered in a 3D modelling environment (Blender). Each image is then enhanced by a GAN to more closely match the domain of real photographs, and a volumetric consistency loss is introduced, improving the performance of 3D reconstruction on real images. Our solution harnesses the cost benefits of synthetic data while maintaining good performance on real-world images. We focus this work on the task of 3D banana reconstruction from a single image, a common task in plant phenotyping, but the approach is general and may be adapted to any 3D reconstruction task, including other plant species and organs.
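For readers unfamiliar with the idea, the following is a minimal sketch of how a volumetric consistency term could be written in PyTorch, assuming a volumetric regressor that maps an image to an occupancy volume; the function names, tensor shapes, and loss choices are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch (not the paper's code): a volumetric consistency term
# that encourages the regressor to predict the same occupancy volume for a
# synthetic image and its GAN-translated, real-looking counterpart.
import torch
import torch.nn.functional as F

def volumetric_consistency_loss(regressor, synthetic_img, translated_img):
    """Penalise disagreement between volumes regressed from a synthetic image
    and from its GAN-enhanced version of the same scene."""
    vol_synthetic = regressor(synthetic_img)    # (B, D, H, W) occupancy logits
    vol_translated = regressor(translated_img)  # same shape, same scene
    return F.mse_loss(vol_translated, vol_synthetic)

def supervised_volume_loss(regressor, synthetic_img, gt_volume):
    """Standard volumetric regression loss against the Blender ground truth."""
    return F.binary_cross_entropy_with_logits(regressor(synthetic_img), gt_volume)
```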

Research Article

Detecting Sorghum Plant and Head Features from Multispectral UAV Imagery

In plant breeding, unmanned aerial vehicles (UAVs) carrying multispectral cameras have demonstrated increasing utility for high-throughput phenotyping (HTP) to aid the interpretation of genotype and environment effects on morphological, biochemical, and physiological traits. A key constraint remains the reduced resolution and quality extracted from “stitched” mosaics generated from UAV missions across large areas. This can be addressed by generating high-quality reflectance data from a single nadir image per plot. In this study, a pipeline was developed to derive reflectance data from raw multispectral UAV images that preserve the original high spatial and spectral resolutions and to use these for phenotyping applications. Sequential steps involved (i) imagery calibration, (ii) spectral band alignment, (iii) backward calculation, (iv) plot segmentation, and (v) application. Each step was designed and optimised to estimate the number of plants and count sorghum heads within each breeding plot. Using a derived nadir image of each plot, the coefficients of determination were 0.90 and 0.86 for estimates of the number of sorghum plants and heads, respectively. Furthermore, the reflectance information acquired from the different spectral bands showed appreciably high discriminative ability for sorghum head colours (i.e., red and white). Deployment of this pipeline allowed accurate segmentation of crop organs at the canopy level across many diverse field plots with minimal training required for the machine learning approaches.
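As a rough illustration of steps (i) and (iv), the sketch below uses generic NumPy operations for empirical-line reflectance calibration against a reference panel and NDVI-threshold vegetation segmentation; the panel reflectance, bit depth, band choice, and threshold are hypothetical placeholders, not values from the study.

```python
# Illustrative sketch (not the published pipeline): reflectance calibration and
# a crude vegetation mask, with placeholder values throughout.
import numpy as np

def dn_to_reflectance(band_dn, panel_dn_mean, panel_reflectance=0.5):
    """Empirical-line style calibration of raw digital numbers (DNs) to
    reflectance using a reference panel captured in the same flight."""
    return band_dn.astype(np.float32) * (panel_reflectance / panel_dn_mean)

def segment_vegetation(red, nir, ndvi_threshold=0.4):
    """Mask pixels whose NDVI exceeds a threshold."""
    ndvi = (nir - red) / np.clip(nir + red, 1e-6, None)
    return ndvi > ndvi_threshold

# Example with synthetic data standing in for a single-plot nadir image.
red = dn_to_reflectance(np.random.randint(0, 4096, (100, 100)), panel_dn_mean=2000.0)
nir = dn_to_reflectance(np.random.randint(0, 4096, (100, 100)), panel_dn_mean=1800.0)
mask = segment_vegetation(red, nir)
print(f"Vegetation fraction: {mask.mean():.2f}")
```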

Database/Software Article

Global Wheat Head Detection 2021: An Improved Dataset for Benchmarking Wheat Head Detection Methods

The Global Wheat Head Detection (GWHD) dataset was created in 2020 and assembled 193,634 labelled wheat heads from 4700 RGB images acquired from various acquisition platforms across 7 countries/institutions. With an associated competition hosted on Kaggle, GWHD_2020 successfully attracted attention from both the computer vision and agricultural science communities. From this first experience, a few avenues for improvement were identified regarding data size, head diversity, and label reliability. To address these issues, the 2020 dataset has been reexamined, relabelled, and complemented with 1722 images from 5 additional countries, contributing 81,553 additional wheat heads. We now release in 2021 a new version of the Global Wheat Head Detection dataset, which is bigger, more diverse, and less noisy than GWHD_2020.
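As an illustration of how such detection labels are typically consumed, here is a minimal loader for a CSV of per-image bounding boxes; the column names (image_name, BoxesString) and the semicolon-separated "x_min y_min x_max y_max" encoding are assumptions about the file layout, not the dataset's documented schema.

```python
# Illustrative sketch: parse per-image wheat head boxes from an annotation CSV.
# Column names and box encoding are assumed, not taken from the GWHD docs.
import csv

def load_wheat_head_boxes(csv_path):
    """Return a dict mapping image name -> list of [x_min, y_min, x_max, y_max]."""
    boxes_per_image = {}
    with open(csv_path, newline="") as handle:
        for row in csv.DictReader(handle):
            boxes = []
            if row["BoxesString"] not in ("", "no_box"):  # assumed empty markers
                for box in row["BoxesString"].split(";"):
                    boxes.append([float(v) for v in box.split()])
            boxes_per_image[row["image_name"]] = boxes
    return boxes_per_image
```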

Research Article

Exploring Seasonal and Circadian Rhythms in Structural Traits of Field Maize from LiDAR Time Series

Growth rhythms in plant structural traits are important for better understanding plant responses to the ever-changing environment. Terrestrial laser scanning (TLS) is a well-suited tool to study structural rhythms under field conditions. Recent studies have used TLS to describe the structural rhythms of trees, but no consistent patterns have emerged, and whether TLS can capture structural rhythms in crops remains unclear. Here, we aim to explore the seasonal and circadian rhythms in maize structural traits at both the plant and leaf levels from time-series TLS. The seasonal rhythm was studied using TLS data collected at four key growth periods: jointing, bell-mouthed, heading, and maturity. Circadian rhythms were explored using TLS data acquired approximately every 2 hours over a whole day under standard and cold stress conditions. Results showed that TLS can quantify the seasonal and circadian rhythms in structural traits at both the plant and leaf levels. (1) Leaf inclination angle decreased significantly between the jointing and bell-mouthed stages, while leaf azimuth was stable after the jointing stage. (2) Some plant-level structural rhythms (e.g., azimuth and projected leaf area, PLA) were consistent with leaf-level structural rhythms. (3) The circadian rhythms of some traits (e.g., PLA) were not consistent between standard and cold stress conditions. (4) Environmental factors showed better correlations with leaf traits under cold stress than under standard conditions; temperature was the most important factor and correlated significantly with all leaf traits except leaf azimuth. This study highlights the potential of time-series TLS for studying outdoor agricultural chronobiology.
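To make the leaf-level traits concrete, the sketch below estimates a leaf inclination angle from a segmented leaf point cloud by fitting a plane with an SVD and measuring the tilt of its normal relative to vertical; this is a generic illustration, not the processing chain used in the study.

```python
# Illustrative sketch (not the study's code): leaf inclination angle from a
# segmented leaf point cloud via PCA plane fitting.
import numpy as np

def leaf_inclination_angle(points):
    """points: (N, 3) array of x, y, z coordinates for a single leaf.
    Returns the angle (degrees) between the fitted leaf plane and horizontal."""
    centred = points - points.mean(axis=0)
    # The right singular vector with the smallest singular value is the plane normal.
    _, _, vt = np.linalg.svd(centred, full_matrices=False)
    normal = vt[-1]
    cos_tilt = abs(normal[2]) / np.linalg.norm(normal)
    return np.degrees(np.arccos(np.clip(cos_tilt, -1.0, 1.0)))

# A nearly horizontal leaf (tiny z variation) should give an angle near 0 degrees.
flat_leaf = np.column_stack([np.random.rand(500), np.random.rand(500),
                             0.01 * np.random.rand(500)])
print(f"Inclination: {leaf_inclination_angle(flat_leaf):.1f} deg")
```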

Review Article

Field Phenomics: Will It Enable Crop Improvement?

Field phenomics has been identified as a promising enabling technology to assist plant breeders with the development of improved cultivars for farmers. Yet, despite much investment, there are few examples demonstrating the application of phenomics within a plant breeding program. We review recent progress in field phenomics and highlight the importance of targeting breeders’ needs, rather than perceived technology needs, through developing and enhancing partnerships between phenomics researchers and plant breeders.

Research Article

Estimates of Maize Plant Density from UAV RGB Images Using Faster-RCNN Detection Model: Impact of the Spatial Resolution

Early-stage plant density is an essential trait that determines the fate of a genotype under given environmental conditions and management practices. The use of RGB images taken from UAVs may replace traditional visual counting in fields with improved throughput, accuracy, and access to plant localization. However, high-resolution images are required to detect the small plants present at the early stages. This study explores the impact of image ground sampling distance (GSD) on the performance of maize plant detection at the three-to-five leaf stage using the Faster-RCNN object detection algorithm. Data collected at high resolution () over six contrasted sites were used for model training. Two additional sites with images acquired at both high and low () resolutions were used to evaluate model performance. Results show that Faster-RCNN achieved very good plant detection and counting () performance when native high-resolution images are used both for training and validation. Similarly, good performance was observed () when the model is trained on synthetic low-resolution images, obtained by downsampling the native high-resolution training images, and applied to the synthetic low-resolution validation images. Conversely, poor performance is obtained when the model is trained on one spatial resolution and applied to another. Training on a mix of high- and low-resolution images yields very good performance on the native high-resolution () and synthetic low-resolution () images. However, performance remains very low on the native low-resolution images (), mainly because of their poor image quality. Finally, an advanced super-resolution method based on a GAN (generative adversarial network), which introduces additional textural information derived from the native high-resolution images, was applied to the native low-resolution validation images. Results show a significant improvement () compared with the bicubic upsampling approach, though still far below the performance achieved on the native high-resolution images.
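As a point of reference for the detection setup, the following sketch builds a one-class Faster R-CNN with torchvision and runs a single dummy training step; the class count, image size, box values, and weights argument (whose name differs across torchvision versions) are assumptions, not the authors' configuration.

```python
# Illustrative sketch, assuming the torchvision Faster R-CNN implementation
# rather than the authors' exact setup.
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

def build_plant_detector(num_classes=2):  # background + maize plant
    # On torchvision < 0.13, use pretrained=True instead of weights="DEFAULT".
    model = fasterrcnn_resnet50_fpn(weights="DEFAULT")
    in_features = model.roi_heads.box_predictor.cls_score.in_features
    model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)
    return model

model = build_plant_detector()
model.train()
# One dummy training step: a single RGB patch with one labelled plant box.
images = [torch.rand(3, 512, 512)]
targets = [{"boxes": torch.tensor([[100.0, 120.0, 160.0, 180.0]]),
            "labels": torch.tensor([1])}]
loss_dict = model(images, targets)  # classification + box regression losses
print(sum(loss for loss in loss_dict.values()))
```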