
Cotton Stand Counting from Unmanned Aerial System Imagery Using MobileNet and CenterNet Deep Learning Models

File
Main article with TTU Libraries cover page (2.324Mb)
Date
2021
Author
Lin, Zhe (TTU)
Guo, Wenxuan (TTU)
Abstract
An accurate stand count is a prerequisite to determining the emergence rate, assessing seedling vigor, and facilitating site-specific management for optimal crop production. Traditional manual counting methods for stand assessment are labor-intensive and time-consuming for large-scale breeding programs or production field operations. This study aimed to apply two deep learning models, MobileNet and CenterNet, to detect and count cotton plants at the seedling stage in unmanned aerial system (UAS) images. These models were trained on two datasets containing 400 and 900 images with variations in plant size and soil background brightness. The performance of these models was assessed with two testing datasets of different dimensions: testing dataset 1 with 300 by 400 pixel images and testing dataset 2 with 250 by 1200 pixel images. The model validation results showed that the mean average precision (mAP) and average recall (AR) were 79% and 73% for the CenterNet model, and 86% and 72% for the MobileNet model with 900 training images. The accuracy of cotton plant detection and counting was higher with testing dataset 1 for both the CenterNet and MobileNet models. The results showed that the CenterNet model had a better overall performance for cotton plant detection and counting with 900 training images. The results also indicated that more training images are required when applying object detection models to images with dimensions different from those of the training datasets. The mean absolute percentage error (MAPE), coefficient of determination (R²), and root mean squared error (RMSE) of the cotton plant counts were 0.07%, 0.98, and 0.37, respectively, with testing dataset 1 for the CenterNet model with 900 training images. Both the MobileNet and CenterNet models have the potential to detect and count cotton plants accurately and in a timely manner from high-resolution UAS images at the seedling stage. This study provides valuable information for selecting the right deep learning tools and the appropriate number of training images for object detection projects in agricultural applications.
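The counting-accuracy metrics cited in the abstract (MAPE, R², and RMSE) are standard regression-style measures of how well per-image model counts track manual ground-truth counts. The Python sketch below is a minimal illustration of how they are computed; it is not the authors' code, and the sample counts are hypothetical placeholders, not data from the study.

import numpy as np

def counting_metrics(manual: np.ndarray, predicted: np.ndarray) -> dict:
    """Compare model plant counts against manual ground-truth counts."""
    errors = predicted - manual
    mape = np.mean(np.abs(errors) / manual) * 100   # mean absolute percentage error (%)
    rmse = np.sqrt(np.mean(errors ** 2))            # root mean squared error (plants)
    ss_res = np.sum(errors ** 2)                    # residual sum of squares
    ss_tot = np.sum((manual - manual.mean()) ** 2)  # total sum of squares
    r2 = 1.0 - ss_res / ss_tot                      # coefficient of determination
    return {"MAPE": mape, "R2": r2, "RMSE": rmse}

# Hypothetical per-image stand counts for five test images (placeholder values).
manual = np.array([42, 38, 51, 47, 40], dtype=float)
predicted = np.array([41, 38, 50, 47, 41], dtype=float)
print(counting_metrics(manual, predicted))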
Citable Link
https://doi.org/10.3390/rs13142822
https://hdl.handle.net/2346/90407
Collections
  • Faculty Research


DSpace software copyright © 2002-2016 DuraSpace
Contact
TDL
Theme by Atmire NV