Learning CNN Filters From User-Drawn Image Markers for Coconut-Tree Image Classification

  • Italos de Souza, Unicamp
  • Alexandre Falcão, Unicamp

Abstract

Identifying species of trees in aerial images is essential for land-use classification, plantation monitoring, and impact assessment of natural disasters. The manual identification of trees in aerial images is tedious, costly, and error-prone, so automatic classification methods are necessary. Convolutional neural network (CNN) models have been very successful in image classification applications from different domains. However, CNN models usually require intensive manual annotation to create large training sets. One may conceptually divide a CNN into convolutional layers for feature extraction and fully connected layers for feature space reduction and classification. We present a method that requires a minimal set of user-selected images to train the CNN's feature extractor, reducing the number of images required to train the fully connected layers. The method learns the filters of each convolutional layer from user-drawn markers placed in image regions that discriminate the classes, allowing better user control and understanding of the training process. It does not rely on backpropagation-based optimization, and we demonstrate its advantages over one of the most popular CNN models on the binary classification of coconut-tree aerial images.
Keywords: Design of convolutional neural networks (CNNs), interactive machine learning, remote sensing image analysis.
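The abstract does not detail how filters are estimated from the user-drawn markers. The sketch below illustrates one plausible interpretation, assuming that small patches are extracted around marker pixels, normalized, and clustered, with the cluster centers reused as convolutional kernels; the function name `filters_from_markers`, the patch size, and the use of k-means are illustrative assumptions, not the authors' published procedure.

```python
# Hypothetical sketch: deriving convolutional kernels from user-drawn markers.
# The patch-clustering step is an assumption made for illustration only; the
# abstract does not specify the exact filter-estimation procedure.
import numpy as np
from sklearn.cluster import KMeans


def filters_from_markers(image, marker_coords, patch_size=3, n_filters=8):
    """Estimate convolutional kernels from patches centered at marker pixels.

    image         : (H, W, C) float array, e.g. an RGB aerial tile.
    marker_coords : list of (row, col) pixels the user marked as discriminative.
    patch_size    : side length of the square patch around each marker.
    n_filters     : number of kernels to learn for this layer.
    """
    half = patch_size // 2
    patches = []
    for r, c in marker_coords:
        patch = image[r - half:r + half + 1, c - half:c + half + 1, :]
        if patch.shape[:2] != (patch_size, patch_size):
            continue  # skip markers too close to the image border
        patch = patch - patch.mean()       # zero-center the patch
        norm = np.linalg.norm(patch)
        if norm > 0:
            patch = patch / norm           # unit norm, so kernels act as pattern detectors
        patches.append(patch.ravel())

    patches = np.array(patches)
    # Cluster the marker patches; each cluster center becomes one kernel.
    kmeans = KMeans(n_clusters=n_filters, n_init=10, random_state=0).fit(patches)
    return kmeans.cluster_centers_.reshape(n_filters, patch_size, patch_size, -1)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    img = rng.random((64, 64, 3)).astype(np.float32)   # stand-in for an aerial tile
    markers = [(10, 12), (10, 13), (30, 40), (31, 41), (50, 20),
               (51, 21), (20, 20), (21, 21), (40, 10), (41, 11)]
    kernels = filters_from_markers(img, markers, patch_size=3, n_filters=4)
    print(kernels.shape)  # (4, 3, 3, 3): four 3x3 kernels over 3 channels
```

Because the kernels come directly from clustered marker patches, no backpropagation is needed at this stage, which is consistent with the abstract's claim that the feature extractor is trained without gradient-based optimization.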
Published
07/11/2020
How to Cite

DE SOUZA, Italos; FALCÃO, Alexandre. Learning CNN Filters From User-Drawn Image Markers for Coconut-Tree Image Classification. In: CONFERENCE ON GRAPHICS, PATTERNS AND IMAGES (SIBGRAPI), 33., 2020, Online Event. Proceedings [...]. Porto Alegre: Sociedade Brasileira de Computação, 2020. p. 464-468.