A Context-Aware Approach for Filtering Empty Images in Camera Trap Data Using Siamese Network

  • Luiz Alencar UFAM
  • Fagner Cunha UFAM
  • Eulanda M. dos Santos UFAM

Abstract


This paper presents a method based on a Siamese convolutional neural network (CNN) for filtering empty images captured by camera traps. The proposed method takes into account information about the environment surrounding the camera by comparing captured images with empty reference images obtained regularly from the same capture locations. Reference images are expected to highlight local scene features such as vegetation, rocks, mountains, and lakes. By calculating the similarity between the two images, the Siamese network determines whether or not the captured image contains an animal. We present a protocol for building image pairs to train the models, as well as the data augmentation techniques employed to enhance the training procedure. Three different CNN models are used as backbones for the Siamese network: MobileNetV2, ResNet50, and EfficientNetB0. In addition, experiments are conducted on three popular camera trap datasets: Snapshot Serengeti, Caltech, and WCS. The results demonstrate the effectiveness of the proposed method, owing to the capture-location information it considers, and its potential for wildlife monitoring applications.
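
To make the idea concrete, the sketch below illustrates one plausible way to set up such a Siamese filter in Python with TensorFlow/Keras: a shared CNN backbone (MobileNetV2 is used here; ResNet50 or EfficientNetB0 would be drop-in replacements) embeds both the captured image and the empty reference image from the same location, and a binary head decides from the embedding difference whether an animal is present. The input size, embedding dimension, and the absolute-difference similarity head are illustrative assumptions, not the authors' exact architecture.

    import tensorflow as tf
    from tensorflow.keras import layers, Model
    from tensorflow.keras.applications import MobileNetV2

    IMG_SHAPE = (224, 224, 3)   # assumed input resolution
    EMBED_DIM = 256             # assumed embedding size

    def build_encoder():
        # Shared backbone mapping an image to a fixed-size embedding.
        backbone = MobileNetV2(include_top=False, weights="imagenet",
                               input_shape=IMG_SHAPE, pooling="avg")
        inputs = layers.Input(shape=IMG_SHAPE)
        x = backbone(inputs)
        outputs = layers.Dense(EMBED_DIM)(x)
        return Model(inputs, outputs, name="encoder")

    def build_siamese():
        # Both images pass through the same encoder; the absolute difference
        # of their embeddings feeds a binary classifier (animal vs. empty).
        encoder = build_encoder()
        captured = layers.Input(shape=IMG_SHAPE, name="captured_image")
        reference = layers.Input(shape=IMG_SHAPE, name="empty_reference_image")
        emb_a = encoder(captured)
        emb_b = encoder(reference)
        diff = layers.Lambda(lambda t: tf.abs(t[0] - t[1]))([emb_a, emb_b])
        prob = layers.Dense(1, activation="sigmoid", name="contains_animal")(diff)
        return Model([captured, reference], prob, name="siamese_filter")

    model = build_siamese()
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])

In this setup, training pairs would couple each captured image with an empty reference image from the same camera location, with the label indicating whether the captured image contains an animal; the specific pairing protocol and data augmentation follow the paper itself.
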
Published
November 6, 2023
How to Cite

ALENCAR, Luiz; CUNHA, Fagner; SANTOS, Eulanda M. dos. A Context-Aware Approach for Filtering Empty Images in Camera Trap Data Using Siamese Network. In: CONFERENCE ON GRAPHICS, PATTERNS AND IMAGES (SIBGRAPI), 36., 2023, Rio Grande/RS. Anais [...]. Porto Alegre: Sociedade Brasileira de Computação, 2023. p. 85-90.