
Research Article

Easy MPE: Extraction of Quality Microplot Images for UAV-Based High-Throughput Field Phenotyping

Léa Tresch1,2, Yue Mu3, Atsushi Itoh4, Akito Kaga5, Kazunori Taguchi4, Masayuki Hirafuji1, Seishi Ninomiya1,3, and Wei Guo1

1International Field Phenomics Research Laboratory, Institute for Sustainable Agro-Ecosystem Services, Graduate School of Agricultural and Life Sciences, The University of Tokyo, Tokyo, Japan
2Montpellier SupAgro, Montpellier, France
3Plant Phenomics Research Center, Nanjing Agricultural University, Nanjing, China
4Memuro Upland Farming Research Station, Hokkaido Agricultural Research Center, National Agriculture and Food Research Organization, Hokkaido, Japan
5Institute of Crop Science, National Agriculture and Food Research Organization, Tsukuba City, Ibaraki, Japan
Correspondence should be addressed to Wei Guo; guowei@isas.a.u-tokyo.ac.jp

Abstract

Microplot extraction (MPE) is a necessary image processing step in unmanned aerial vehicle- (UAV-) based research on breeding fields. At present, it is performed manually using ArcGIS, QGIS, or other GIS-based software, and achieving the desired accuracy is time-consuming. We therefore developed an intuitive, easy-to-use semiautomatic program for MPE called Easy MPE to enable researchers and others to access reliable plot data from UAV images of whole fields under variable field conditions. The program uses four major steps: (1) binary segmentation, (2) microplot extraction, (3) production of .shp files to enable further file manipulation, and (4) projection of individual microplots generated from the orthomosaic back onto the raw aerial UAV images to preserve the image quality. Crop rows were successfully identified in all trial fields. The performance of the proposed method was evaluated by calculating the intersection-over-union (IOU) ratio between microplots determined manually and by Easy MPE: the average IOU (±SD) of all trials was 91% (±3%).

1. Introduction

One of the major aims of investigations to improve agricultural machinery has been to reduce yield gaps [1]. The way we understand fields has been revolutionized by advances in yield monitoring technology, which have allowed field data to be measured with more and more precision. As a result, crop yield can now be measured at the microplot scale, and the same level of accuracy has been achieved by seeders and other kinds of agricultural machinery.

In recent years, great advances in sensors, aeronautics, and high-performance computing, in particular, have made high-throughput phenotyping more effective and accessible [2]. The characterization of quantitative traits such as yield and stress tolerance at a fine scale allows even complex phenotypes to be identified [3]. Eventually, specific varieties derived from a large number of different lines can be selected to reduce yield variation (e.g., [4–10]). Breeding efficiency now depends on scaling up the phenotype identification step, that is, on high-throughput phenotyping, as high-throughput genotyping is already providing genomic information quickly and cheaply [2].

Unmanned aerial vehicles (UAVs) have become one of the most popular information retrieval tools for high-throughput phenotyping, owing to the reduced costs of purchasing and deploying them, their easier control and operation, and their higher sensor compatibility. Proposed image analysis pipelines for extracting useful phenotypic information normally include several steps: 3D mapping by structure-from-motion (SfM) and multiview stereo (MVS) techniques, generation of orthomosaic images (an image composed of several orthorectified aerial images stitched together) and digital surface models, extraction of microplots, and extraction of phenotypic traits. Among these steps, microplot extraction (MPE; i.e., cropping images to the size of individual subplots) is essential because it allows fields (or plots; that is to say, cultivated areas made up of several microplots) to be examined at a level of detail corresponding to the level of accuracy of technologies such as the yield monitoring and seeder technologies mentioned above, ultimately providing better results [4, 8].

There are three main ways to perform MPE:

(i)

Manually, usually by using geographic information system (GIS) software

(ii)

Semiautomatically, which still requires some manual input

(iii)

Automatically, which requires no manual input

To the best of our knowledge, no fully automatic microplot extraction method has been published so far.

Several semiautomatic techniques have been developed to extract microplots from UAV images. A method proposed by Hearst [9] takes advantage of device interconnectivity by using the information on the geolocalization of crop rows from the seeder. The resulting orthomosaic thus contains spatial landmarks that allow it to be divided into microplots. However, this approach requires access to sophisticated technology and a high skill level. Most methods use image analysis tools, and they all require the user to define the maximum borders of one subplot. The whole field is then divided into equal-sized replicate subplots [10]. These methods require the plot to be very regular and consistently organized, which may not be the case, especially for breeding fields not planted by machine. More recently, Khan and Miklavcic [11] proposed a grid-based extraction method in which the plot grid is adapted to match the actual positions of plots. This technique allows for more field heterogeneity, but it still requires the width/height, horizontal/vertical spacing, and orientation of a 2D vector grid cell to be specified interactively.

All of these techniques generally start with an orthomosaic image of the field rather than with raw images (i.e., unprocessed aerial images taken by the UAV, which are used to produce the orthomosaic). The consequent loss of image quality is an important consideration for high-throughput phenotyping.

Most MPE is still done manually, such as using shapefiles [12], the ArcGIS editor [13], the Fishnet function of Arcpy in ArcGIS [14], or the QGIS tool for plot extraction [15]. Manual extraction is not only time-consuming, depending on the size of the whole field, but also potentially less accurate because it is prone to human bias.

In this paper, we propose a semiautomated MPE solution based on image analysis. We tested the proposed method on six different field datasets covering two crops. Sugarbeet and soybean were chosen as test crops because both are important for feeding humanity [16, 17]. The growth pattern of the young plants fulfills the program requirements perfectly because they are generally planted in dense rows. Our ultimate goal was to elucidate the small differences in sugarbeet and soybean growth in relation to microclimatic conditions in each field [18, 19]. Accurate identification of microplots therefore allows the phenotypic traits of the crop in different locations of a field to be assessed, which in turn allows us to evaluate whether the program is suitable for high-throughput phenotyping of growth patterns and plant sizes. First, as a preprocessing step, a binary segmented image is produced by one of two image segmentation methods: application of the Excess Green (ExG) vegetation index [20] followed by the Otsu threshold [21], or the EasyPCC program [22, 23]. The ExG index and the Otsu threshold have been shown to perform well at differentiating vegetation from nonplant elements (mostly bare soil) [22–24]. The EasyPCC program uses machine learning to associate pixels with either vegetation or background areas, based on the manual selection of example plants in the selected images; this program can produce a segmented image of a field when the ExG index and Otsu threshold method may not work, for example, because of light and color variation or other outdoor environmental variations in the image [22–24].

The preprocessing step is followed by MPE. To extract microplots, planted areas of the field are identified by summing the pixels classified as vegetation and then calculating the average. Microplot columns (90-degree rotation from crop rows) are identified by comparing vegetation pixel values with the average and erasing those below a threshold value. The same procedure is then applied to the column images to identify crop rows within each microplot column. The initial orthomosaic is then cropped to the borders identified by the program, and a shapefile is generated for each microplot to facilitate further image manipulation in GIS software or custom programs. Microplot intersection points are also saved in an Excel spreadsheet file.

Finally, if raw unprocessed aerial images and SfM files (containing information on internal and external camera parameters) are available, the program can directly extract the identified microplots from the raw images so that the microplots will contain the maximum level of detail.

2. Materials and Methods

2.1. Experimental Fields and Image Acquisition

Six field datasets were used as experimental units (Table 1, Table S1). The plants were sown in precise plot configurations, and all trial fields were managed according to the ordinary local management practices.

Table 1

Trial field and image acquisition information.

Before image acquisition, the waypoint routing of the UAV was specified by using a self-developed software tool that automatically creates data formatted for import into Litchi for DJI drones (VC Technology, UK). This readable waypoint routing data allowed highly accurate repetition of flight missions so that all images had the same spatial resolution. Orthomosaics of the georeferenced raw images were then produced in Pix4Dmapper Pro software (Pix4D, Lausanne, Switzerland; Table S1).

UAVs overflew each field several times during the growing period of each crop. The flight dates in Table 1 are dates on which images that suited the program requirements were obtained (i.e., with uniform crop rows that did not touch each other and low content of weeds).

2.2. Easy Microplot Extraction

The Easy MPE code is written in the Python language (Python Software Foundation, Python Language Reference, v. 3.6.8 [24]) with the use of the following additional software packages: OpenCV v. 3.4.3 [25], pyQt5 v. 5.9.2 [26], NumPy v. 2.6.8 [27], Scikit-Image v. 0.14.1 [28], Scikit-Learn v. 0.20.1 [29], pyshp package v. 2.0.1 [30], rasterio v. 1.0.13 [31], rasterstats v. 0.13.0 [32], and Fiona v. 1.8.4 [33].

The main aim of the program is to identify columns, and crop rows within each column, on an orthomosaic image of a field so that microplot areas can be extracted. Additional outputs that the user may need for further analysis of the data are also provided. The program comprises four processing steps: binary segmentation (preprocessing), microplot identification and extraction, production of shapefiles, and additional production of microplots extracted from raw individual images by reverse calculation.

The code was run on a laptop computer (Intel Core i7-8550U CPU @ 1.80 GHz, 16 GB RAM, Windows 10 Home 64-bit operating system) connected to an external graphics processing unit (NVIDIA, GeForce GTX 1080Ti).

Figure 1 shows the four major steps of the Easy MPE program, and the binary segmentation and microplot extraction details are described in smaller steps A–I. Orange arrows indicate automated steps. In steps A to B, however, the region of interest in the orthomosaic must be manually selected. Note that for simplicity and clarity, some steps have been omitted.

Figure 1

Global pipeline of the Easy MPE program, demonstrated using dataset 3.

2.3. Binary Segmentation

The ExG vegetation index (step C) and the Otsu threshold (step D) are applied to each pixel of an orthomosaic image to produce a segmented binary image (Figure 1). Excess Green is calculated as

ExG = 2g − r − b,   (1)

where r, g, and b are the normalized red, green, and blue channel values of a pixel.

In the case of a very bright or very dark environment, where the Otsu threshold does not perform well [34–36], EasyPCC is recommended [22].
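For illustration, a minimal sketch of this ExG-plus-Otsu preprocessing using OpenCV, NumPy, and scikit-image (the function name and I/O handling are our own assumptions, not the Easy MPE source code):

```python
import cv2
import numpy as np
from skimage.filters import threshold_otsu

def exg_otsu_segmentation(orthomosaic_path):
    """Segment vegetation using the Excess Green index followed by an Otsu threshold."""
    bgr = cv2.imread(orthomosaic_path).astype(np.float64)   # OpenCV loads images as BGR
    b, g, r = cv2.split(bgr)
    total = r + g + b
    total[total == 0] = 1.0                                  # avoid division by zero on black pixels
    exg = 2.0 * (g / total) - (r / total) - (b / total)      # ExG = 2g - r - b on normalized channels
    threshold = threshold_otsu(exg)                          # Otsu threshold on the ExG values
    return ((exg > threshold) * 255).astype(np.uint8)        # white = vegetation, black = background
```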

2.4. Microplot Extraction

MPE is the core step of the program. In this step, binary images of diverse field layouts are segmented (partitioned) into columns and crop rows.

First, the image is rotated (step E) using the coordinates of the bounding box drawn around the binary image. In step A, it is important to draw the boundaries of the region of interest so that no object that might be segmented lies between the boundaries and the crop; otherwise, the image will not be correctly rotated.

Second, columns are extracted by identifying local maxima in the profile obtained by summing the white pixels along each column. To make the columns even, the binary image is slightly manipulated with an erosion followed by a horizontal dilation. The white pixels are first summed along the column direction, and the average of these sums is calculated. Then, all values that are less than 1/3 of the average are erased. The erased pixels represent “transition parts” at either end of the crop rows as well as small intercolumn weeds. Columns are then identified (step F) by scanning across the column direction: a transition from black to white pixels marks the left edge of a column, and the subsequent transition from white to black pixels marks its right edge.

Third, on the orthomosaic binary image, crop rows are identified within each column (step H). Because crop rows are more homogeneous along their own direction than the columns identified in step F, the intrarow variation in the sum of the white pixels is expected to be small; moreover, interrow weeds are expected to be frequent. Thus, values less than 1/2 of the average sum are erased in step H. Then, crop rows are identified by scanning across the row direction and detecting transitions in the segmented image from black to white or from white to black pixels.
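The column and crop-row identification described above can be sketched with a single projection-profile helper applied twice with different thresholds (1/3 for columns, 1/2 for crop rows). This is a simplified illustration, not the Easy MPE source; it assumes the rotated binary image has columns running vertically:

```python
import numpy as np

def split_by_profile(binary, axis, fraction):
    """Return (start, end) pixel intervals of white regions along a projection profile.

    White pixels are summed along `axis`, values below `fraction` of the mean
    profile value are erased, and region edges are found as black-to-white and
    white-to-black transitions in the cleaned profile.
    """
    profile = (binary > 0).sum(axis=axis).astype(float)
    profile[profile < fraction * profile.mean()] = 0          # erase transition parts and small weeds
    mask = np.concatenate(([False], profile > 0, [False]))
    edges = np.diff(mask.astype(int))
    starts = np.where(edges == 1)[0]
    ends = np.where(edges == -1)[0]
    return list(zip(starts, ends))

# Columns (assumed vertical): sum white pixels down each image column, keep values >= 1/3 of the mean.
# columns = split_by_profile(binary, axis=0, fraction=1/3)
# Crop rows inside one column image: same procedure with a 1/2 threshold.
# rows = split_by_profile(binary[:, left:right], axis=1, fraction=1/2)
```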

According to the number of crop rows and the number of columns in each field input into the program, microplots are cropped and saved (step I). Their coordinates are calculated in the orthomosaic coordinate system, or if the orthomosaic does not include coordinates, from the positions of the pixels on the image.
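As an example of the coordinate calculation, a microplot's pixel bounding box can be converted to the orthomosaic's coordinate reference system with rasterio's affine transform (a sketch assuming the orthomosaic is a georeferenced GeoTIFF; the function name is illustrative):

```python
import rasterio
from rasterio.transform import xy

def pixel_box_to_geo(orthomosaic_path, row_min, row_max, col_min, col_max):
    """Convert a microplot's pixel bounding box to corner coordinates in the orthomosaic CRS."""
    with rasterio.open(orthomosaic_path) as src:
        corners_px = [(row_min, col_min), (row_min, col_max),
                      (row_max, col_max), (row_max, col_min)]
        # xy() maps (row, col) pixel indices to (x, y) coordinates using the raster's affine transform
        return [xy(src.transform, r, c) for r, c in corners_px]
```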

2.5. Shapefile Production and Reverse Calculation

The program can output shapefiles of the microplots.

Shapefiles are produced by using the coordinates calculated during the MPE step, with the same characteristics as the orthomosaic image.
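A minimal sketch of writing one polygon per microplot with the pyshp package (the attribute field name and the .prj handling are illustrative assumptions, not the exact Easy MPE output schema):

```python
import shapefile  # pyshp

def write_microplot_shapefile(path, microplot_corners, crs_wkt=None):
    """Write one polygon record per microplot.

    microplot_corners: list of 4-corner rings, each [(x1, y1), ..., (x4, y4)].
    crs_wkt: optional WKT string copied from the orthomosaic so that the
             shapefile shares its coordinate reference system.
    """
    w = shapefile.Writer(path, shapeType=shapefile.POLYGON)
    w.field("PLOT_ID", "N")
    for i, ring in enumerate(microplot_corners, start=1):
        w.poly([ring + [ring[0]]])   # pyshp expects a list of closed rings
        w.record(i)
    w.close()
    if crs_wkt:                      # pyshp does not write .prj files itself
        with open(path + ".prj", "w") as prj:
            prj.write(crs_wkt)
```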

Reverse calculation is the process of projecting individual microplots determined from the orthomosaic image back onto the corresponding area of the raw images. It is aimed at preserving the raw image resolution, instead of the lower quality of the orthomosaic image (Figure 2), for estimation of phenotypic traits such as ground coverage [8].

Figure 2

Quality diminution in an orthomosaic from dataset 3 (a) compared to the corresponding raw image (b).

For the field trials, the following Pix4Dmapper Pro [37] outputs (Table 1) were used: the P-Matrix (which contains information on the internal and external camera parameters), the offset (the difference between the local coordinate system of the digital surface model (DSM) and the output coordinate system), the DSM, and the raw image data used to produce an orthomosaic of the field. Equations (2), (3), (4), and (5) are used to determine the coordinates of each microplot in the raw images:

(X′, Y′, Z′) = (X, Y, Z) − (offset_X, offset_Y, offset_Z),   (2)

(x, y, z)^T = PMat × (X′, Y′, Z′, 1)^T,   (3)

u = x / z,   (4)

v = y / z,   (5)

where (X, Y, Z) are the 3D coordinates of the microplot: the values of X and Y are obtained during the microplot extraction step, and Z is the average height of the microplot in the DSM; (X′, Y′, Z′) are the corrected 3D coordinates of the microplots after subtraction of the offset, so that they fit the P-Matrix (PMat) coordinate system; (x, y, z) are intermediate 3D coordinates in the camera’s coordinate system that are used to convert 3D points into 2D points; and (u, v) are the 2D pixel coordinates of the microplot in the raw image.
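A compact sketch of this reverse calculation, assuming the P-Matrix has been read into a 3 × 4 NumPy array and the offset into a length-3 vector (variable and function names are ours):

```python
import numpy as np

def project_to_raw_image(xyz, pmat, offset):
    """Project a 3D microplot corner onto a raw image via a Pix4D P-matrix.

    xyz:    (X, Y, Z) in the output coordinate system; Z is the average
            microplot height taken from the DSM.
    pmat:   3x4 P-matrix of the raw image (internal + external parameters).
    offset: (ox, oy, oz) offset between the local and output coordinate systems.
    """
    corrected = np.asarray(xyz, dtype=float) - np.asarray(offset, dtype=float)  # eq. (2)
    homogeneous = np.append(corrected, 1.0)                                     # (X', Y', Z', 1)
    x, y, z = np.asarray(pmat, dtype=float) @ homogeneous                       # eq. (3)
    return x / z, y / z                                                         # eqs. (4) and (5): (u, v)
```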

Easy MPE outputs a .csv file with 11 columns (column number, crop row number, raw image name, and the four corner coordinates of the microplot in the raw image). This output was chosen to exclude unwanted data that would unnecessarily increase computation times while including the information necessary for the user to be able to easily use the data.

Finally, computational times were determined by measuring the processing time required for each major step, because manual inputs are required between steps. Processing times were measured with the Python time module, five times for each field.

3. Results

3.1. Parameter Input into the MPE Program

The input parameters of the MPE program are field orthomosaic, type of image (binary or RGB), noise, number of columns, number of crop rows per column, and global orientation of the columns in the orthomosaic (horizontal or vertical). The noise parameter is used after the segmentation. Objects smaller than the input value are removed in order to output a homogeneous image without signals labeled as noise by the user.
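For illustration, the noise parameter can be applied as a minimum object size with scikit-image (a sketch with an assumed function name; the noise value is interpreted here as a pixel count):

```python
from skimage.morphology import remove_small_objects

def remove_noise(binary_mask, noise):
    """Remove segmented objects smaller than `noise` pixels from the binary mask."""
    return remove_small_objects(binary_mask.astype(bool), min_size=noise)
```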

For each dataset, the inputs were either self-evident (type of image, number of columns, number of crop rows per column, and global orientation of the columns) or determined from the field conditions and image resolution (image input, noise). The targeted area in the orthomosaic (i.e., the field) had to be manually delimited so that the program would be applied to the desired area.

Datasets 5 and 6 were binary images, and datasets 1 to 4 were RGB images. One image, dataset 4, had to be resized because it was too large for our computer to handle.

Because the targeted area was manually delimited by the user, the program outputs could change slightly between trials.

Input details are available in Table S2.

3.2. Manually Produced Reference Microplots

To evaluate the performance of Easy MPE, we produced reference microplots manually and used them as ground-truth data. The desired program output consists of microplots, each having the number of columns and crop rows specified by the user, so that the user’s experimental design can be fit to the field or so that local field information can be determined more precisely.

Reference microplots were delimited by hand on orthomosaic images, as precisely as possible until the level of accuracy was judged by the user to be sufficient. We aimed to minimize the exclusion of the target-MPE plant pixels and minimize the inclusion of adjacent rows or columns. Shapefiles for the reference images were produced by using the grid tool in QGIS Las Palmas v. 2.18.24 software [38]. The free, open-source QGIS was used because it is intuitive and accessible to all users.

Although the manually produced reference microplots may include bias introduced by the user, their use does not compromise the study’s aim, which was to compare program-delimited microplots with manually delimited ones.

3.3. Performance Evaluation: Intersection over Union

The manually determined microplots were automatically compared with the program-generated microplots by using the saved coordinates in their shapefiles and a Python program. This program uses pyshp v. 2.0.1 [30] to retrieve the .shp coordinates from the shapefiles and shapely v. 1.6.4 [39] to compare the microplot areas.

We used the intersection-over-union (IOU) performance criterion [40] to evaluate the similarity between the predicted area and the ground-truth area of each microplot. IOU is a standard performance measure used to evaluate object category segmentation performance (Figure 3).

Figure 3

Visual representation of the intersection (yellow area in (a)) and union (yellow area in (b)) areas of manually (blue) and program-determined (green) areas.

IOU is calculated as

IOU = (area of intersection) / (area of union).   (6)

The output is a .csv file with six columns: plot identification number, the program .shp area, the manually determined .shp area, the intersection area, the union area, and the IOU.
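A minimal sketch of this comparison for a single pair of microplots, using pyshp to read the geometries and shapely to compute the areas (names are illustrative; the actual program iterates over all plots and writes the six-column .csv described above):

```python
import shapefile                      # pyshp
from shapely.geometry import shape

def plot_iou(program_shp, manual_shp, index):
    """IOU between the program-determined and manually determined microplot number `index`."""
    program = shape(shapefile.Reader(program_shp).shape(index))   # pyshp shapes expose __geo_interface__
    manual = shape(shapefile.Reader(manual_shp).shape(index))
    intersection = program.intersection(manual).area
    union = program.union(manual).area
    return program.area, manual.area, intersection, union, intersection / union
```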

3.4. Statistical Analysis

The population standard deviation of the IOU is calculated as

σ_IOU = sqrt((1/N) × Σ_{i=1..N} (IOU_i − IOU_mean)²),   (7)

where N is the number of microplots in the targeted field, IOU_i is the IOU ratio of microplot i, and IOU_mean is the mean IOU of the targeted field.

Precision and recall are defined by equations (8) and (9), respectively:

precision = TP / (TP + FP),   (8)

recall = TP / (TP + FN).   (9)

TP indicates a true positive: the area that has been correctly considered by the program to be part of a microplot; thus, TP is the intersection of the program-determined and manually determined areas (yellow area in Figure 3(a)). FP indicates a false positive: the area that has been incorrectly considered by the program to be part of a microplot; thus, FP is the program-determined area minus the intersection (green area in Figure 3(b)). FN means a false negative: areas in the manually determined shapefiles that were not considered to be part of microplots in the program shapefiles; thus, FN is the manually determined area minus the intersection (blue area in Figure 3(a)).

Thus, precision gives the percentage of the program-determined microplot area that has been correctly identified by the program, and recall is the percentage of the manually determined microplot area that has been correctly identified by the program. If the program-determined (manually determined) area is wholly included in the manually determined (program-determined) area, then precision (recall) will be equal to 100%.

We also calculated the standard deviations of precision and recall.
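Given the per-plot areas from the comparison step, precision, recall, and the population standard deviation of the IOU can be computed directly; a sketch with NumPy (array names are illustrative):

```python
import numpy as np

def field_statistics(program_areas, manual_areas, intersection_areas, union_areas):
    """Mean precision, recall, and IOU for a field, plus the population standard deviation of the IOU."""
    tp = np.asarray(intersection_areas, dtype=float)              # true positives: intersection areas
    fp = np.asarray(program_areas, dtype=float) - tp              # false positives: program area outside the intersection
    fn = np.asarray(manual_areas, dtype=float) - tp               # false negatives: manual area outside the intersection
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    iou = tp / np.asarray(union_areas, dtype=float)
    return precision.mean(), recall.mean(), iou.mean(), iou.std() # np.std uses the population formula by default
```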

3.5. Comparison Results

The program successfully identified microplots in all trial fields (Figure 4).

Figure 4

Comparisons of manually determined (yellow lines) and program-determined (red lines) microplot boundaries: (a–f) datasets 1–6.

The mean IOU among the datasets was 91% (±3%), indicating a 91% overlap between program-determined and manually determined microplot areas (Table 2). Moreover, among these fields, an IOU of <80% was obtained only for dataset 2, which comprised 20 individual microplots (the lowest IOU being 71%).

Table 2

Intersection-over-union results.

Mean precision and recall were 95% (±2%) and 95% (±2%), respectively. These results indicate that neither manually determined nor program-determined microplots showed a tendency to be wholly included in the other. Therefore, the IOU can be understood to indicate a shifting of microplot boundaries between them.

3.6. Computation Time

Computation time depends, of course, on the computer used to run the program.

The computational times (Table 3) varied among the trial images, but some global trends can be observed by comparing the computational times in Table 3 with dataset information provided in Tables S1 and S2. The program was slower overall when dealing with larger images, which is to be expected because the program involves image manipulation. The computational time required for binary segmentation depended mostly on the type of input (binary or RGB), whereas the time required for microplot extraction and reverse calculation depended mainly on the number of microplots in the trial field, because this number determines the number of images that must be manipulated in each of these steps.

Table 3

Average computational times of Easy MPE per major step for each dataset.

4. Discussion

The microplot results obtained by Easy MPE were similar to those obtained manually. However, manual identification means only that the accuracy is controlled and judged to be acceptable. Thus, the relationship between the IOU and “absolute accuracy” depends on the precision with which the reference plots are defined. The study results confirm that the accuracy of the program is similar to that obtained by manual identification of microplots, but a direct comparison of the results obtained by the two methods suggests that the program-determined microplots are more precise and more regular than those determined manually (Figure 4).

In addition, Easy MPE places a few conditions on the initial image. The field must be fairly homogeneous, and crop rows should not touch each other, or only slightly. These conditions were met by both the soybean and sugarbeet fields in this study, but other types of fields still need to be tested. Continuous field observations (including multiple drone flights) are likely to be required to obtain a usable image. Image pretreatment by removing weed pixels, either manually or automatically, would help Easy MPE produce good results but is not mandatory.

The only manual input to the program is plot delimitation by the user at the very beginning. It is essential that no segmented objects other than the field be included in the region of interest. It might be possible to automate this step by providing GPS coordinates from the seeder or by using an already cropped orthomosaic image, but either method would diminish the freedom of the user to apply the program to an area smaller than the whole field.

Note also that application of the program is limited by the available computing resources, as shown by the example of dataset 4.

The MPE method used by Easy MPE differs from previously published methods. Easy MPE uses image parameters and information on the field geometry to adapt itself to the field, whereas in the method proposed by Wallhead et al. [12], a cellular grid is laid over the image so that each individual rectangular cell is optimally aligned. Online software (e.g., [11]) also uses a grid and requires the user to indicate the positions of several microplots. Easy MPE asks only for the delimitation of a working zone and no other image manipulation, which increases its user-friendliness. Easy MPE has been built as part of a suite of open-source programs for high-throughput phenotyping, along with EasyPCC [22] and future additions.

Other published methods do not include the reverse calculation step, which allows Easy MPE to provide images of the same quality as the raw images. The Easy MPE reverse calculation procedure is coded for Pix4D outputs, but outputs of free SfM software could be used as well. Three files are required [41]:

(i)

The P-Matrix file, which contains information about the internal and external camera parameters, allowing the 3D coordinates to be converted into 2D coordinates

(ii)

A DSM file, which is the usual output of SfM software

(iii)

An offset file, which contains the offsets between local coordinate system points used in the creation of some of the output and the output coordinate system

Note that the code would need to be adapted to be able to extract the needed data from free SfM software outputs, but the adapted Easy MPE code would then be entirely free.

Other possible improvements include the implementation of a different vegetation index, such as NDVI, SAVI, or GNDVI, for preprocessing segmentation and the linking of EasyPCC to Easy MPE. Verification methods could also be added as an option; for example, the Hough transform or the average distance between crop rows within microplots could be used to verify that the field components are correctly identified. However, these methods could be used only with geometrically regular fields that have a very uniform seed distribution; they would thus narrow the current applicability of the code.

Finally, the code needs to be further tested and improved by applying it to other important crops that are not as densely planted as soybean and sugarbeet, such as maize or wheat. Fields with crop residues were not tested in this demonstration either. The impact of a poor-quality orthomosaic has not been investigated and should be measured in order to give the plant phenotyping community good insight into the possible uses of Easy MPE.

Overall,

(i)

Easy MPE is recommended for its user-friendliness and simplicity, although many points still have to be tested and validated, which leaves room for improvement. It delimits microplots in an unbiased way, i.e., without human influence, is part of an open-source suite of programs designed for the plant phenotyping community, includes a segmentation step if needed, and provides the first automation of the reverse calculation process

(ii)

Grid-based programs are efficient, as demonstrated by many publications, and recent improvements such as [11] provide quite robust tools for microplot identification. However, they inherently require plots to be rectangular, and placement of the grid is influenced by human perception, which can introduce errors. They do not provide any additional services

(iii)

The approach in [9] can be fully automatic if the seeding process and GPS localization are extremely precise (RTK recommended) and if the field team has access to a high level of technology and informatics competencies

Conflicts of Interest

The authors declare that they have no conflict of interest regarding this work or its publication.

Authors’ Contributions

LT developed the algorithm and Python code with input from WG; YM conducted the reverse calculations; AI developed the executable Windows program; WG, AK, and KT conceived, designed, and coordinated the field experiments; WG, MH, and SN supervised the entire study; LT wrote the paper with input from all authors. All authors read and approved the final manuscript.

Acknowledgments

The authors thank Dr. Shunji Kurokawa, Division of Crop Production Systems, Central Region Agricultural Research Center, NARO, Japan; technicians of the Institute for Sustainable Agro-ecosystem Services, The University of Tokyo; and members of the Memuro Upland Farming Research Station, Hokkaido Agricultural Research Center, NARO, for field management support. This work was partly funded by the CREST Program “Knowledge Discovery by Constructing AgriBigData” (JPMJCR1512) and the SICORP Program “Data Science-Based Farming Support System for Sustainable Crop Production under Climatic Change” of the Japan Science and Technology Agency and the “Smart-Breeding System for Innovative Agriculture (BAC3001)” of the Ministry of Agriculture, Forestry and Fisheries of Japan.

Supplementary Materials

Table S1: details about image acquisition of the trial fields. Table S2: Easy MPE program inputs for each dataset. (Supplementary Materials)

References

1. D. Tilman, C. Balzer, J. Hill, and B. L. Befort, “Global food demand and the sustainable intensification of agriculture,” Proceedings of the National Academy of Sciences of the United States of America, vol. 108, no. 50, pp. 20260–20264, 2011.
2. J. L. Araus and J. E. Cairns, “Field high-throughput phenotyping: the new crop breeding frontier,” Trends in Plant Science, vol. 19, pp. 52–61, 2014.
3. G. J. Rebetzke, J. Jimenez-Berni, R. A. Fischer, D. M. Deery, and D. J. Smith, “Review: high-throughput phenotyping to enhance the use of crop genetic resources,” Plant Science, vol. 282, pp. 40–48, 2019.
4. W. Guo, B. Zheng, A. B. Potgieter et al., “Aerial imagery analysis – quantifying appearance and number of sorghum heads for applications in breeding and agronomy,” Frontiers in Plant Science, vol. 9, article 1544, 2018.
5. P. Hu, W. Guo, S. C. Chapman, Y. Guo, and B. Zheng, “Pixel size of aerial imagery constrains the applications of unmanned aerial vehicle in crop breeding,” ISPRS Journal of Photogrammetry and Remote Sensing, vol. 154, pp. 1–9, 2019.
6. S. Ghosal, B. Zheng, S. C. Chapman et al., “A weakly supervised deep learning framework for sorghum head detection and counting,” Plant Phenomics, vol. 2019, 2019.
7. M. F. Dreccer, G. Molero, C. Rivera-Amado, C. John-Bejai, and Z. Wilson, “Yielding to the image: how phenotyping reproductive growth can assist crop improvement and production,” Plant Science, vol. 282, pp. 73–82, 2019.
8. T. Duan, B. Zheng, W. Guo, S. Ninomiya, Y. Guo, and S. C. Chapman, “Comparison of ground cover estimates from experiment plots in cotton, sorghum and sugarcane based on images and ortho-mosaics captured by UAV,” Functional Plant Biology, vol. 44, no. 1, pp. 169–183, 2017.
9. A. A. Hearst, “Automatic extraction of plots from geo-registered UAS imagery of crop fields with complex planting schemes,” Open Access Theses, vol. 332, 2014, https://docs.lib.purdue.edu/open_access_theses/332.
10. Solvi, “Zonal Statistics and trial plots analytics,” July 2019, https://help.solvi.nu/article/34-zonal-statistics-and-trial-plots-analytics.
11. Z. Khan and S. J. Miklavcic, “An automatic field plot extraction method from aerial orthomosaic images,” Frontiers in Plant Science, vol. 10, p. 683, 2019.
12. M. Wallhead, H. Zhu, J. Sulik, and W. Stump, “A workflow for extracting plot-level biophysical indicators from aerially acquired multispectral imagery,” Plant Health Progress, vol. 18, no. 2, pp. 95–96, 2017.
13. R. Makanza, M. Zaman-Allah, J. Cairns et al., “High-throughput phenotyping of canopy cover and senescence in maize field trials using aerial digital canopy imaging,” Remote Sensing, vol. 10, no. 2, p. 330, 2018.
14. A. Haghighattalab, L. González Pérez, S. Mondal et al., “Application of unmanned aerial systems for high throughput phenotyping of large wheat breeding nurseries,” Plant Methods, vol. 12, no. 1, p. 35, 2016.
15. X. Wang, D. Singh, S. Marla, G. Morris, and J. Poland, “Field-based high-throughput phenotyping of plant height in sorghum using different sensing technologies,” Plant Methods, vol. 14, p. 53, 2018.
16. FAO, “Sugarbeet,” July 2019, http://www.fao.org/land-water/databases-and-software/crop-information/sugarbeet/en/.
17. FAO, “Soybean,” July 2019, http://www.fao.org/land-water/databases-and-software/crop-information/soybean/en/.
18. J. R. Lamichhane, J. Constantin, J. N. Aubertot, and C. Dürr, “Will climate change affect sugar beet establishment of the 21st century? Insights from a simulation study using a crop emergence model,” Field Crops Research, vol. 238, pp. 64–73, 2019.
19. M. D. Bhattarai, S. Secchi, and J. Schoof, “Projecting corn and soybeans yields under climate change in a Corn Belt watershed,” Agricultural Systems, vol. 152, pp. 90–99, 2017.
20. G. E. Meyer and J. C. Neto, “Verification of color vegetation indices for automated crop imaging applications,” Computers and Electronics in Agriculture, vol. 63, no. 2, pp. 282–293, 2008.
21. N. Otsu, “A threshold selection method from gray-level histograms,” IEEE Transactions on Systems, Man, and Cybernetics, vol. 9, no. 1, pp. 62–66, 1979.
22. W. Guo, B. Zheng, T. Duan, T. Fukatsu, S. Chapman, and S. Ninomiya, “EasyPCC: benchmark datasets and tools for high-throughput measurement of the plant canopy coverage ratio under field conditions,” Sensors, vol. 17, no. 4, p. 798, 2017.
23. W. Guo, U. K. Rage, and S. Ninomiya, “Illumination invariant segmentation of vegetation for time series wheat images based on decision tree model,” Computers and Electronics in Agriculture, vol. 96, pp. 58–66, 2013.
24. Python Software Foundation, “Python Language Reference,” June 2019, https://www.python.org.
25. Itseez, “OpenCV 2018,” June 2019, https://github.com/itseez/opencv.
26. Riverbank Computing, “PyQt5,” June 2019, https://www.riverbankcomputing.com/software/pyqt/download5.
27. T. E. Oliphant, A Guide to NumPy, Trelgol Publishing, 2006.
28. S. van der Walt, J. L. Schönberger, J. Nunez-Iglesias et al., “Scikit-image: image processing in Python,” PeerJ, vol. 2, article e453, 2014.
29. F. Pedregosa, G. Varoquaux, A. Gramfort et al., “Scikit-learn: machine learning in Python,” Journal of Machine Learning Research, vol. 12, pp. 2825–2830, 2011.
30. “pyshp,” June 2019, https://github.com/GeospatialPython/pyshp.
31. S. Gillies, B. Ward, A. S. Petersen et al., Rasterio: Geospatial Raster I/O for Python Programmers, Mapbox, 2013.
32. “rasterstats,” June 2019, https://pythonhosted.org/rasterstats/index.html.
33. S. Gillies, R. Buffat, K. Wurster et al., Fiona is OGR’s Neat, Nimble, No-Nonsense API, Toblerity, 2011.
34. D. M. Woebbecke, G. E. Meyer, K. Von Bargen, and D. A. Mortensen, “Color indices for weed identification under various soil, residue, and lighting conditions,” Transactions of the ASAE, vol. 38, no. 1, pp. 259–269, 1995.
35. E. Hamuda, M. Glavin, and E. Jones, “A survey of image processing techniques for plant extraction and segmentation in the field,” Computers and Electronics in Agriculture, vol. 125, pp. 184–199, 2016.
36. M. P. Ponti, “Segmentation of low-cost remote sensing images combining vegetation indices and mean shift,” IEEE Geoscience and Remote Sensing Letters, vol. 10, no. 1, pp. 67–70, 2013.
37. Pix4D, “Pix4Dmapper,” June 2019, https://www.pix4d.com/.
38. QGIS Development Team, “QGIS,” June 2019, http://qgis.osgeo.org.
39. S. Gillies, O. Tonnhofer, J. Arnott et al., “Shapely: manipulation and analysis of geometric objects,” December 2018, https://github.com/Toblerity/Shapely.
40. M. Everingham, L. Van Gool, C. K. I. Williams, J. Winn, and A. Zisserman, “The Pascal visual object classes (VOC) challenge,” International Journal of Computer Vision, vol. 88, no. 2, pp. 303–338, 2010.
41. Pix4D, “What does the Output Params Folder contain?” July 2019, https://support.pix4d.com/hc/en-us/articles/202977149-What-does-the-Output-Params-Folder-contain#label12.

Copyright © 2019 Léa Tresch et al. Exclusive Licensee Nanjing Agricultural University. Distributed under a Creative Commons Attribution License (CC BY 4.0).
