A Deep Learning-Based Phenotypic Analysis of Rice Root Distribution from Field Images
The open access journal Plant Phenomics, published in association with NAU, publishes novel research that advances plant phenotyping and connects phenomics with other research domains.
Plant Phenomics' editorial board is led by Seishi Ninomiya (University of Tokyo), Frédéric Baret (French National Institute of Agricultural Research), and Zong-Ming Cheng (Nanjing Agricultural University/University of Tennessee), and comprises leading experts in the field.
Latest Articles
Machine Learning-Based Presymptomatic Detection of Rice Sheath Blight Using Spectral Profiles
Early detection of plant diseases, prior to symptom development, can allow for targeted and more proactive disease management. The objective of this study was to evaluate the use of near-infrared (NIR) spectroscopy combined with machine learning for early detection of rice sheath blight (ShB), caused by the fungus Rhizoctonia solani. We collected NIR spectra from leaves of the ShB-susceptible rice (Oryza sativa L.) cultivar Lemont, grown in a growth chamber, one day following inoculation with R. solani and prior to the development of any disease symptoms. Two machine learning algorithms, support vector machine (SVM) and random forest, were used to build and evaluate the accuracy of supervised classification-based disease predictive models. Sparse partial least squares discriminant analysis was used to confirm the results. The most accurate model comparing mock-inoculated and inoculated plants was SVM-based and had an overall testing accuracy of 86.1%, while, when control, mock-inoculated, and inoculated plants were compared, the most accurate SVM model had an overall testing accuracy of 73.3%. These results suggest that machine learning models could be developed into tools for diagnosing infected but asymptomatic plants from their spectral profiles at the early stages of disease development. While testing and validation in field trials are still needed, this technique holds promise for in-field disease diagnosis and management.
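As a rough illustration of the supervised classification step described above, the sketch below trains an SVM on synthetic "spectra" with scikit-learn. The number of leaves, number of wavelengths, class shift, and hyperparameters are all placeholders; the study's actual spectra and preprocessing are not reproduced here.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_leaves, n_bands = 120, 200            # hypothetical: 120 leaves, 200 NIR wavelengths
X = rng.normal(size=(n_leaves, n_bands))
y = rng.integers(0, 2, size=n_leaves)   # 0 = mock-inoculated, 1 = inoculated
X[y == 1] += 0.5                        # simulate a subtle presymptomatic spectral shift

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
model.fit(X_tr, y_tr)
acc = model.score(X_te, y_te)           # fraction of held-out leaves classified correctly
```

Standardizing the spectra before the SVM is a common (assumed) preprocessing choice; with real NIR data, derivative or scatter-correction preprocessing would typically be evaluated as well.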
Coffee Flower Identification Using Binarization Algorithm Based on Convolutional Neural Network for Digital Images
Crop-type identification is one of the most significant applications of agricultural remote sensing, and it is important for yield prediction and field management. At present, crop identification using datasets from unmanned aerial vehicle (UAV) and satellite platforms has achieved state-of-the-art performance. However, accurate monitoring of small plants, such as coffee flowers, cannot be achieved using datasets from these platforms. With the development of time-lapse image acquisition technology based on ground-based remote sensing, a large number of small-scale plantation datasets with high spatiotemporal resolution are being generated, providing great opportunities for small-target monitoring of a specific region. The main contribution of this paper is to combine an OTSU-based binarization algorithm with a convolutional neural network (CNN) model to improve coffee flower identification accuracy in time-lapse (i.e., digital) images. A number of positive and negative samples are selected from the original digital images for network training. The network is initialized with pretrained VGGNet weights and trained on the constructed training datasets. The well-trained CNN model first extracts the coffee flowers, and their boundary information is then refined using the result of the binarization algorithm. The performance of the proposed method is evaluated on digital images with different depression angles and illumination conditions, in comparison with a support vector machine (SVM) and a CNN-only model. The experimental results show that the proposed method improves coffee flower classification accuracy.
The best results are obtained for the image with a 52.5° depression angle under soft lighting, with a Dice (F1) score of 0.80 and an intersection over union (IoU) of 0.67.
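The OTSU step referenced above selects a global threshold that separates flower pixels from background by maximizing between-class variance. The sketch below implements that criterion from scratch on a synthetic bimodal image; it stands in for the paper's binarization stage, not its exact pipeline, and the intensity values are invented.

```python
import numpy as np

def otsu_threshold(gray):
    """Return the Otsu threshold for a uint8 image (maximizes between-class variance)."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    omega = np.cumsum(p)                   # cumulative class-0 probability
    mu = np.cumsum(p * np.arange(256))     # cumulative mean intensity
    mu_t = mu[-1]                          # global mean
    denom = omega * (1.0 - omega)
    denom[denom == 0] = np.nan             # guard against empty classes
    sigma_b = (mu_t * omega - mu) ** 2 / denom
    return int(np.nanargmax(sigma_b))

# Synthetic scene: dark canopy background with a bright "flower" patch.
rng = np.random.default_rng(1)
img = rng.normal(60, 10, size=(64, 64))
img[20:30, 20:30] = rng.normal(200, 10, size=(10, 10))
img = np.clip(img, 0, 255).astype(np.uint8)

t = otsu_threshold(img)
mask = img > t   # binary flower mask, analogous to the paper's binarization output
```

In practice a library call (e.g., scikit-image's `threshold_otsu`) would replace the hand-rolled function; it is written out here only to show the between-class-variance criterion.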
Repeated Multiview Imaging for Estimating Seedling Tiller Counts of Wheat Genotypes Using Drones
Early-generation breeding nurseries with thousands of genotypes in single-row plots are well suited to capitalize on high-throughput phenotyping. Nevertheless, methods to monitor the intrinsically hard-to-phenotype early development of wheat are still rare. We aimed to develop proxy measures for the rate of plant emergence, the number of tillers, and the beginning of stem elongation using drone-based imagery. We used RGB images (ground sampling distance of 3 mm pixel⁻¹) acquired by repeated flights (≥2 flights per week) to quantify temporal changes in visible leaf area. To exploit the information contained in the multitude of viewing angles within the RGB images, we processed them into multiview ground cover images showing plant pixel fractions. Based on these images, we trained a support vector machine to detect the beginning of stem elongation (GS30). Using GS30 as a key time point, we subsequently extracted plant counts using a watershed algorithm and tiller counts using growth modeling. Our results show that determination coefficients of predictions are moderate for plant count but strong for tiller count and GS30. Heritabilities are superior to manual measurements for plant count and tiller count, but inferior for GS30. Increasing the selection intensity enabled by higher throughput may overcome this limitation. Multiview image traits can replace hand measurements with high efficiency (85–223%). We therefore conclude that multiview images have a high potential to become a standard tool in plant phenomics.
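Counting plants from a ground-cover mask can be illustrated with connected-component labeling, a deliberately simplified stand-in for the watershed algorithm the authors use (watershed additionally splits touching plants at local maxima). The mask below is invented.

```python
import numpy as np
from scipy import ndimage

# Hypothetical multiview ground-cover mask: True where plant pixels were observed.
mask = np.zeros((40, 40), dtype=bool)
mask[5:10, 5:10] = True     # plant 1
mask[5:10, 20:26] = True    # plant 2
mask[25:32, 10:18] = True   # plant 3

# Label each connected plant region; n_plants is the emerged-plant count.
labels, n_plants = ndimage.label(mask)
print(n_plants)  # → 3
```

For overlapping seedlings, `skimage.segmentation.watershed` seeded from distance-transform peaks would separate merged regions that plain labeling counts as one.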
High-Throughput Phenotyping of Dynamic Canopy Traits Associated with Stay-Green in Grain Sorghum
Drought is a recurring phenomenon that puts crop yields at risk and threatens the livelihoods of many people around the globe. Stay-green is a drought adaptation phenotype found in sorghum and other cereals. Plants expressing this phenotype show less drought-induced senescence and maintain functional green leaves for longer when water limitation occurs during grain fill, conferring benefits in both yield per se and harvestability. The physiological causes of the phenotype are postulated to be water saving through mechanisms such as reduced canopy size, or access to extra water through mechanisms such as deeper roots. In sorghum breeding programs, stay-green has traditionally been assessed by comparing visual scores of leaf senescence, either by identifying final leaf senescence or by estimating the rate of leaf senescence. In this study, we compared measurements of canopy dynamics obtained from remote sensing on two sorghum breeding trials to stay-green values (breeding values) obtained from visual leaf senescence ratings in multienvironment breeding trials, to determine which components of canopy development were most closely linked to the stay-green phenotype. Surprisingly, canopy size as estimated using preflowering canopy parameters was weakly correlated with stay-green values for leaf senescence, while postflowering canopy parameters showed a much stronger association with leaf senescence. Our study suggests that factors other than canopy size have an important role in the expression of a stay-green phenotype in grain sorghum, and further that the use of UAVs with multispectral sensors provides an excellent way of measuring canopy traits of hundreds of plots grown in large field trials.
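The abstract does not specify which index was derived from the multispectral imagery; a common choice for tracking canopy greenness and senescence from UAV multispectral sensors is NDVI, sketched below with hypothetical reflectance values.

```python
import numpy as np

def ndvi(nir, red, eps=1e-9):
    """Normalized Difference Vegetation Index from NIR and red reflectance."""
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (nir - red) / (nir + red + eps)

# Hypothetical canopy reflectances: a green plot vs. a senescing plot.
green = ndvi(0.50, 0.08)      # high NIR, low red -> high NDVI
senesced = ndvi(0.30, 0.20)   # reduced NIR, raised red -> low NDVI
```

Tracking such an index per plot across flights gives the postflowering canopy trajectories that the study found most strongly associated with stay-green.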
High-Throughput Rice Density Estimation from Transplantation to Tillering Stages Using Deep Networks
Rice density is closely related to yield estimation, growth diagnosis, cultivated-area statistics, and management and damage evaluation. Currently, rice density estimation relies heavily on manual sampling and counting, which is inefficient and inaccurate. With the prevalence of digital imagery, computer vision (CV) technology emerges as a promising alternative to automate this task. However, challenges of the in-field environment, such as illumination, scale, and appearance variations, leave gaps in deploying CV methods. To close these gaps and achieve accurate rice density estimation, we propose a deep learning-based approach called the Scale-Fusion Counting Classification Network (SFC2Net) that integrates several state-of-the-art computer vision ideas. In particular, SFC2Net addresses appearance and illumination changes by employing a multicolumn pretrained network and multilayer feature fusion to enhance feature representation. To ameliorate the sample imbalance engendered by scale, SFC2Net follows a recent blockwise classification idea. We validate SFC2Net on a new rice plant counting (RPC) dataset collected from two field sites in China from 2010 to 2013. Experimental results show that SFC2Net achieves highly accurate counting performance on the RPC dataset, with a mean absolute error (MAE) of 25.51, a root mean square error (RMSE) of 38.06, a relative MAE of 3.82%, and an R² of 0.98, a relative improvement of 48.2% in MAE over the conventional counting approach CSRNet. Further, SFC2Net provides high-throughput processing capability, at 16.7 frames per second. Our results suggest that manual rice counting can be safely replaced by SFC2Net at early growth stages. Code and models are available online at https://git.io/sfc2net.
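The error metrics reported above (MAE, RMSE, relative MAE, R²) can be computed as follows; the per-image counts here are invented for illustration, not taken from the RPC dataset.

```python
import numpy as np

def counting_metrics(pred, true):
    """MAE, RMSE, relative MAE (%), and R² for predicted vs. true counts."""
    pred, true = np.asarray(pred, float), np.asarray(true, float)
    err = pred - true
    mae = np.abs(err).mean()
    rmse = np.sqrt((err ** 2).mean())
    rmae = (np.abs(err) / true).mean() * 100.0        # relative MAE, %
    ss_res = (err ** 2).sum()
    ss_tot = ((true - true.mean()) ** 2).sum()
    r2 = 1.0 - ss_res / ss_tot                        # coefficient of determination
    return mae, rmse, rmae, r2

pred = [510, 495, 702, 660]   # hypothetical predicted plant counts per image
true = [500, 505, 690, 670]   # hypothetical ground-truth counts
mae, rmse, rmae, r2 = counting_metrics(pred, true)
```

Since RMSE weights large errors quadratically, it is always at least as large as MAE; the gap between the two (38.06 vs. 25.51 in the abstract) hints at a tail of harder images.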
Global Wheat Head Detection (GWHD) Dataset: A Large and Diverse Dataset of High-Resolution RGB-Labelled Images to Develop and Benchmark Wheat Head Detection Methods
The detection of wheat heads in plant images is an important task for estimating pertinent wheat traits including head population density and head characteristics such as health, size, maturity stage, and the presence of awns. Several studies have developed methods for wheat head detection from high-resolution RGB imagery based on machine learning algorithms. However, these methods have generally been calibrated and validated on limited datasets. High variability in observational conditions, genotypic differences, development stages, and head orientation makes wheat head detection a challenge for computer vision. Further, possible blurring due to motion or wind and overlap between heads in dense populations make this task even more complex. Through a joint international collaborative effort, we have built a large, diverse, and well-labelled dataset of wheat images, called the Global Wheat Head Detection (GWHD) dataset. It contains 4,700 high-resolution RGB images and 190,000 labelled wheat heads collected from several countries around the world at different growth stages and with a wide range of genotypes. Guidelines for image acquisition, associating minimum metadata to respect FAIR principles, and consistent head labelling methods are proposed for developing new head detection datasets. The GWHD dataset is publicly available at http://www.global-wheat.com/ and is aimed at developing and benchmarking methods for wheat head detection.
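Benchmarking detectors on a dataset like GWHD typically scores predicted bounding boxes against labelled ones by intersection over union (IoU); a minimal sketch of that computation, with box coordinates invented for illustration:

```python
def box_iou(a, b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    iw, ih = max(0.0, ix2 - ix1), max(0.0, iy2 - iy1)
    inter = iw * ih
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

# Two overlapping boxes: intersection 25, union 175.
iou = box_iou((0, 0, 10, 10), (5, 5, 15, 15))  # → 25/175 ≈ 0.143
```

A predicted head is usually counted as a true positive when its IoU with a labelled head exceeds a threshold such as 0.5, from which precision and recall follow.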