Browsing by Author "Lin, Zhe (TTU)"
Now showing 1 - 3 of 3
Item: Cotton Stand Counting from Unmanned Aerial System Imagery Using MobileNet and CenterNet Deep Learning Models (2021)
Lin, Zhe (TTU); Guo, Wenxuan (TTU)
An accurate stand count is a prerequisite to determining the emergence rate, assessing seedling vigor, and facilitating site-specific management for optimal crop production. Traditional manual counting methods for stand assessment are labor-intensive and time-consuming for large-scale breeding programs or production field operations. This study aimed to apply two deep learning models, MobileNet and CenterNet, to detect and count cotton plants at the seedling stage in unmanned aerial system (UAS) images. The models were trained with two datasets containing 400 and 900 images with variations in plant size and soil background brightness. Their performance was assessed with two testing datasets of different dimensions: testing dataset 1 with 300 by 400 pixels and testing dataset 2 with 250 by 1200 pixels. Model validation showed a mean average precision (mAP) and average recall (AR) of 79% and 73% for the CenterNet model, and 86% and 72% for the MobileNet model, with 900 training images. The accuracy of cotton plant detection and counting was higher with testing dataset 1 for both models. The results showed that the CenterNet model had the better overall performance for cotton plant detection and counting with 900 training images, and indicated that more training images are required when applying object detection models to images whose dimensions differ from those of the training datasets. For the CenterNet model with 900 training images on testing dataset 1, the mean absolute percentage error (MAPE), coefficient of determination (R²), and root mean squared error (RMSE) of the cotton plant counts were 0.07%, 0.98, and 0.37, respectively.
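The counting-error metrics reported above (MAPE, R², RMSE) can be computed as in this minimal sketch, where `y_true` and `y_pred` stand for manual and model-predicted counts per test image; the function name and NumPy-based implementation are illustrative assumptions, not code from the study:

```python
import numpy as np

def count_metrics(y_true, y_pred):
    """Return (MAPE in percent, R^2, RMSE) for observed vs. predicted counts."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    # Mean absolute percentage error, expressed as a percentage.
    mape = np.mean(np.abs((y_true - y_pred) / y_true)) * 100.0
    # Coefficient of determination: 1 - residual SS / total SS.
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    r2 = 1.0 - ss_res / ss_tot
    # Root mean squared error.
    rmse = np.sqrt(np.mean((y_true - y_pred) ** 2))
    return mape, r2, rmse
```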
Both the MobileNet and CenterNet models have the potential to provide accurate and timely detection and counting of cotton plants in high-resolution UAS images at the seedling stage. This study provides valuable information for selecting the right deep learning tools and the appropriate number of training images for object detection projects in agricultural applications.

Item: Effects of irrigation rates on cotton yield as affected by soil physical properties and topography in the Southern High Plains (2021)
Neupane, Jasmine (TTU); Guo, Wenxuan (TTU); West, Charles P. (TTU); Zhang, Fangyuan (TTU); Lin, Zhe (TTU)
Lack of precipitation and groundwater for irrigation limits crop production in semi-arid regions such as the Southern High Plains (SHP). Advanced technologies such as variable rate irrigation (VRI) can conserve water and improve water use efficiency for sustainable agriculture, but the adoption of VRI is hindered by the lack of on-farm research on its feasibility. The objective of this study was to assess the effect of irrigation rates on cotton yield as affected by soil physical properties and topography in the SHP. The study was conducted in two fields within a 194-ha commercially managed farm in Hale County, Texas, in 2017. An irrigation treatment with three rates was implemented in a randomized complete block design with two replications as separate blocks in each field. A total of 230 composite soil samples were collected from the farm in spring 2017 and analyzed for texture, and apparent soil electrical conductivity (ECa), elevation, and final yield data were collected from the fields. A statistical model showed that the effect of irrigation rate on cotton yield depended on its interaction with soil physical properties and topography; for example, areas with slope >2% and sand content >50% showed no significant yield response to higher irrigation rates.
This model suggests that applying irrigation amounts based on the yield response can serve as a basis for VRI. This study provides valuable information for site-specific irrigation to optimize crop production in fields with significant variability in soil physical properties and topography.

Item: Sorghum Panicle Detection and Counting Using Unmanned Aerial System Images and Deep Learning (2020)
Lin, Zhe (TTU); Guo, Wenxuan (TTU)
Machine learning and computer vision technologies based on high-resolution imagery acquired with unmanned aerial systems (UAS) offer a potential for accurate and efficient high-throughput plant phenotyping. In this study, we developed a sorghum panicle detection and counting pipeline for UAS images that integrates image segmentation with a convolutional neural network (CNN) model. A UAS with an RGB camera was used to acquire images (2.7 mm resolution) at a 10-m height in a research field with 120 small plots. A set of 1,000 images was randomly selected, and a mask was created for each by manually delineating the sorghum panicles. These images and their corresponding masks were randomly divided into 10 training datasets ranging from 100 to 1,000 image-mask pairs in increments of 100. A U-Net CNN model was built using these training datasets, and sorghum panicles were detected and counted from the predicted masks. The algorithm was implemented in Python, using the TensorFlow library for the deep learning procedure and the OpenCV library for the panicle counting step. Results showed that accuracy generally increased with the number of training images; the algorithm performed best with 1,000 training images, achieving an accuracy of 95.5% and a root mean square error (RMSE) of 2.5.
The results indicate that the integration of image segmentation and the U-Net CNN model is an accurate and robust method for sorghum panicle counting and offers an opportunity for enhanced sorghum breeding efficiency and accurate yield estimation.
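The counting step of this pipeline (extracting a panicle count from the predicted binary mask) can be sketched as follows. The study used OpenCV for this step; the sketch below instead uses a plain-Python 4-connected flood fill so it is self-contained, and the function name, mask format, and `min_area` noise threshold are illustrative assumptions rather than the authors' implementation:

```python
from collections import deque

def count_panicles(mask, min_area=5):
    """Count panicle blobs in a binary mask via 4-connected flood fill.

    mask: 2-D grid (list of lists) where nonzero pixels are predicted
          panicle pixels.
    min_area: blobs smaller than this many pixels are treated as noise.
    """
    rows, cols = len(mask), len(mask[0])
    seen = [[False] * cols for _ in range(rows)]
    count = 0
    for r in range(rows):
        for c in range(cols):
            if mask[r][c] and not seen[r][c]:
                # Breadth-first flood fill to measure this blob's area.
                area, queue = 0, deque([(r, c)])
                seen[r][c] = True
                while queue:
                    y, x = queue.popleft()
                    area += 1
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and mask[ny][nx] and not seen[ny][nx]):
                            seen[ny][nx] = True
                            queue.append((ny, nx))
                if area >= min_area:
                    count += 1
    return count
```

In a full pipeline, each blob's centroid or bounding box could also be recorded during the fill to locate individual panicles, not just count them.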