Inter-Row Soybean Plantation Identification in Images to Support Automatic Alignment of a Weeder Machine
Abstract
This study explores a computer vision approach to identifying inter-rows in soybean plantations. Related work addresses the same problem, but ours differs by focusing on inter-row identification to support the alignment of weeding machines, which are commonly used by small farmers who grow organic products. We built an experimental database of images captured by a camera attached to a weeder, in which the planting lines and inter-rows were manually labeled. To detect planting lines and inter-rows, we used two segmentation algorithms based on Convolutional Neural Networks (Mask R-CNN and YOLACT), achieving an accuracy of up to 0.656 after interpolating the segmentation results. These results made it possible to estimate the inter-rows satisfactorily. We provide the database of collected images, annotated with planting lines and inter-rows. In future work, we intend to build on these results to create a solution for automatic alignment of the weeder and to develop similar solutions for other crops beyond the soybean fields explored in our experiments.
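The sketch below is an illustrative outline, not the authors' published code: it shows one plausible way to turn two adjacent planting-row masks (as produced by Mask R-CNN or YOLACT) into an inter-row centerline and a lateral offset that a weeder controller could steer against. The function names and the binary-mask input format are assumptions introduced here.

```python
# Minimal sketch, assuming each detected planting row arrives as a binary
# numpy mask (H x W) from an instance-segmentation model such as
# Mask R-CNN or YOLACT. Not the paper's implementation.
import numpy as np

def fit_row_line(mask):
    """Fit x = a*y + b to the pixels of one planting-row mask.

    Rows are roughly vertical in the camera frame, so we regress the
    column coordinate x on the row coordinate y to avoid infinite slopes.
    """
    ys, xs = np.nonzero(mask)
    a, b = np.polyfit(ys, xs, deg=1)
    return a, b

def inter_row_centerline(mask_left, mask_right):
    """Line through the horizontal midpoints of two adjacent row lines."""
    a1, b1 = fit_row_line(mask_left)
    a2, b2 = fit_row_line(mask_right)
    # Averaging x = a*y + b per image row y yields the exact horizontal
    # midpoint between the two fitted lines at every height.
    return (a1 + a2) / 2.0, (b1 + b2) / 2.0

def lateral_offset(a, b, image_width, y_ref):
    """Signed pixel offset of the centerline from the image center at a
    reference height; a controller would steer to drive this toward 0."""
    return (a * y_ref + b) - image_width / 2.0

# Example with two synthetic, perfectly vertical rows:
h, w = 480, 640
left = np.zeros((h, w), dtype=np.uint8)
left[:, 300:305] = 1
right = np.zeros((h, w), dtype=np.uint8)
right[:, 330:335] = 1
a, b = inter_row_centerline(left, right)
print(lateral_offset(a, b, w, y_ref=h - 1))  # ~ -3.0 (centerline left of center)
```

Averaging the two fitted lines per image row gives the horizontal midline exactly under a straight-row assumption; curved or broken rows (as studied by Liang et al. 2022) would call for a more robust fit.

References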
Bai, Y., Zhang, B., Xu, N., Zhou, J., Shi, J., and Diao, Z. (2023). Vision-based navigation and guidance for agricultural autonomous vehicles and robots: A review. Computers and Electronics in Agriculture, 205:107584.
Barbosa, F. M. and Osório, F. S. (2023). Estudo de estratégia de aprendizado auto-supervisionado para aprimoramento da consistência temporal em modelo de segmentação semântica baseado em deep learning. In Seminário Integrado de Software e Hardware, pages 1–12.
Basso, M. and de Freitas, E. P. (2020). A UAV guidance system using crop row detection and line follower algorithms. Journal of Intelligent & Robotic Systems, 97(3):605–621.
Bharati, P. and Pramanik, A. (2020). Deep learning techniques—R-CNN to Mask R-CNN: A survey. Computational Intelligence in Pattern Recognition, pages 657–668.
Bolya, D., Zhou, C., Xiao, F., and Lee, Y. J. (2019). YOLACT: Real-time instance segmentation. In IEEE/CVF International Conference on Computer Vision, pages 9157–9166.
Bonadies, S. and Gadsden, S. A. (2019). An overview of autonomous crop row navigation strategies for unmanned ground vehicles. Engineering in Agriculture, Environment and Food, 12(1):24–31.
Bradski, G. and Kaehler, A. (2008). Learning OpenCV: Computer vision with the OpenCV library. O'Reilly Media, Inc.
Chakravarthy, A. S., Sinha, S., Narang, P., Mandal, M., Chamola, V., and Yu, F. R. (2022). DroneSegNet: Robust aerial semantic segmentation for UAV-based IoT applications. IEEE Transactions on Vehicular Technology, 71(4):4277–4286.
Champ, J., Mora-Fallas, A., Goëau, H., Mata-Montero, E., Bonnet, P., and Joly, A. (2020). Instance segmentation for the fine detection of crop and weed plants by precision agricultural robots. Applications in Plant Sciences, 8(7):e11373.
Cheng, C., Fu, J., Su, H., and Ren, L. (2023). Recent advancements in agriculture robots: Benefits and challenges. Machines, 11(1):48.
Dias, M., Santos, C., Aguiar, M., Welfer, D., Pereira, A., and Ribeiro, M. (2023). Um novo método baseado em detector de dois estágios para segmentação de instância de lesões retinianas usando o modelo mask r-cnn e a biblioteca detectron2. In Seminário Integrado de Software e Hardware, pages 1–12. SBC.
Dutta, A. and Zisserman, A. (2019). The VIA annotation software for images, audio and video. In ACM International Conference on Multimedia, pages 2276–2279.
Haralick, R. M. and Shapiro, L. G. (1985). Image segmentation techniques. Computer Vision, Graphics, and Image Processing, 29(1):100–132.
He, K., Gkioxari, G., Dollár, P., and Girshick, R. (2017). Mask R-CNN. In IEEE International Conference on Computer Vision, pages 2961–2969.
Illingworth, J. and Kittler, J. (1988). A survey of the Hough transform. Computer Vision, Graphics, and Image Processing, 44(1):87–116.
Kanagasingham, S., Ekpanyapong, M., and Chaihan, R. (2020). Integrating machine vision-based row guidance with GPS and compass-based routing to achieve autonomous navigation for a rice field weeding robot. Precision Agriculture, 21(4):831–855.
Kise, M. and Zhang, Q. (2008). Development of a stereovision sensing system for 3d crop row structure mapping and tractor guidance. Biosystems Engineering, 101(2):191–198.
Liang, X., Chen, B., Wei, C., and Zhang, X. (2022). Inter-row navigation line detection for cotton with broken rows. Plant Methods, 18(1):90.
Lin, T.-Y., Maire, M., Belongie, S., Hays, J., Perona, P., Ramanan, D., Dollár, P., and Zitnick, C. L. (2014). Microsoft COCO: Common objects in context. In European Conference on Computer Vision, pages 740–755. Springer.
Minaee, S., Boykov, Y., Porikli, F., Plaza, A., Kehtarnavaz, N., and Terzopoulos, D. (2021). Image segmentation using deep learning: A survey. IEEE Transactions on Pattern Analysis and Machine Intelligence, 44(7):3523–3542.
Otsu, N. (1979). A threshold selection method from gray-level histograms. IEEE Transactions on Systems, Man, and Cybernetics, 9(1):62–66.
Purcell, W., Neubauer, T., and Mallinger, K. (2023). Digital twins in agriculture: Challenges and opportunities for environmental sustainability. Current Opinion in Environmental Sustainability, 61:101252.
Redmon, J., Divvala, S., Girshick, R., and Farhadi, A. (2016). You only look once: Unified, real-time object detection. In IEEE Conference on Computer Vision and Pattern Recognition, pages 779–788.
Ren, S., He, K., Girshick, R., and Sun, J. (2016). Faster R-CNN: Towards real-time object detection with region proposal networks. IEEE Transactions on Pattern Analysis and Machine Intelligence, 39(6):1137–1149.
Wang, C.-Y., Yeh, I.-H., and Liao, H.-Y. M. (2024). YOLOv9: Learning what you want to learn using programmable gradient information. arXiv preprint arXiv:2402.13616.
Zou, Z., Chen, K., Shi, Z., Guo, Y., and Ye, J. (2023). Object detection in 20 years: A survey. Proceedings of the IEEE.
Published
21/07/2024
How to Cite
PANIZZON, Jailson Lucas; ORTONCELLI, André Roberto; SOUZA, Alinne C. Correa; SOUZA, Francisco Carlos M.; OLIVEIRA, Rafael Paes de. Inter-Row Soybean Plantation Identification in Images to Support Automatic Alignment of a Weeder Machine. In: SEMINÁRIO INTEGRADO DE SOFTWARE E HARDWARE (SEMISH), 51., 2024, Brasília/DF. Anais [...]. Porto Alegre: Sociedade Brasileira de Computação, 2024. p. 217-227. ISSN 2595-6205. DOI: https://doi.org/10.5753/semish.2024.2994.