Abstract
Smart agriculture leverages advanced technologies such as the Internet of Things (IoT), deep learning, and computer vision to address challenges in pest and disease detection. Semantic image segmentation has emerged as a vital tool for monitoring plant health, enabling the detection of subtle anomalies and patterns through visual data analysis. Deep encoder-decoder architectures, such as U-Net, have shown significant potential in this domain. However, challenges in the encoding phase, including the loss of low-level features, incomplete texture analysis, and weak edge detection, often limit segmentation accuracy. To overcome these limitations, this study introduces an enhanced U-Net model that uses ResNet-50 as the encoder and incorporates a channel-spatial attention block in the decoding phase. This attention block first compresses channel features and subsequently amplifies spatial features across different regions, improving the preservation and recovery of low-level features. This enhancement enables more precise and distinguishable segmentation of diseased regions in plant leaves. Experimental results highlight the effectiveness of the proposed approach, achieving an Intersection over Union (IoU) of 93.35% and a Dice coefficient of 0.9645 in plant disease segmentation tasks. These advancements demonstrate significant potential for real-world applications, facilitating accurate disease detection and efficient crop management in smart agriculture. The code and dataset associated with this research are publicly available at https://github.com/Faphnut/Plant-Disease-semanticsegmentation.git
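
The following is a minimal sketch of a channel-then-spatial attention block of the kind described above (channel features compressed first, spatial features re-weighted second), assuming a common CBAM-style formulation. The class name ChannelSpatialAttention, the reduction ratio of 16, and the 7x7 spatial convolution are illustrative assumptions and are not taken from the released code.

```python
# Hedged sketch: channel-then-spatial attention for a decoder feature map.
# Assumes a CBAM-style design; the paper's exact block may differ.
import torch
import torch.nn as nn

class ChannelSpatialAttention(nn.Module):
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        # Channel attention: squeeze spatial dimensions, then re-weight channels.
        self.channel_mlp = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),
        )
        # Spatial attention: pool across channels, then re-weight spatial positions.
        self.spatial_conv = nn.Sequential(
            nn.Conv2d(2, 1, kernel_size=7, padding=3),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = x * self.channel_mlp(x)                 # compress / re-weight channels
        avg_map = x.mean(dim=1, keepdim=True)       # per-pixel channel average
        max_map, _ = x.max(dim=1, keepdim=True)     # per-pixel channel maximum
        attn = self.spatial_conv(torch.cat([avg_map, max_map], dim=1))
        return x * attn                             # amplify informative regions

# Example: attend over a hypothetical decoder feature map of shape (B, 64, 128, 128).
if __name__ == "__main__":
    block = ChannelSpatialAttention(channels=64)
    out = block(torch.randn(2, 64, 128, 128))
    print(out.shape)  # torch.Size([2, 64, 128, 128])
```

In a U-Net decoder, such a block would typically be applied to the fused skip-connection and upsampled features at each stage, so that low-level edge and texture cues from the encoder are emphasised before the next upsampling step.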