A Comparative Study of Methods based on Deep Neural Networks for Self-reading of Energy Consumption in a Chatbot Application Context

  • Carlos V. M. Rocha UFMA
  • Pedro H. C. Vieira UFMA
  • Antonio M. Pinto UFMA
  • Pedro V. M. Bernhard UFMA
  • Ricardo J. F. Anchieta Junior UFMA
  • Ricardo C. S. Marques UFMA
  • Italo F. S. Silva UFMA
  • Simara V. Rocha UFMA
  • Aristófanes C. Silva UFMA
  • Hugo D. C. S. Nogueira Equatorial Energy Group
  • Eliana M. G. Monteiro Equatorial Energy Group

Abstract

Self-reading is a process in which consumers are responsible for measuring their own energy consumption, which can be done through digital platforms such as websites or mobile applications. The electric utilities of the Equatorial Energy group have been developing a chatbot application through which consumers send an image of their energy meter to a server, where a method based on image processing and deep learning automatically recognizes the consumption reading. However, a method deployed in a publicly available solution must account for both accuracy and response time, so that it remains responsive even when handling a large number of simultaneous requests. Therefore, this paper presents a comparative study of approaches developed for the automatic recognition of consumption readings in images of electric meters sent to the server. Response time is analyzed through stress tests that simulate the real application scenario, and the mean average precision (mAP) and accuracy of the methods are also analyzed in order to evaluate the generalization of the convolutional neural networks used.
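The stress tests described above can be sketched as follows. This is a minimal illustrative harness, not the authors' test setup: the real system sends meter images over HTTP to a recognition server, so the `recognize_reading` stub (a hypothetical stand-in with a fixed simulated inference delay) would be replaced by an actual request to the service endpoint.

```python
import statistics
import time
from concurrent.futures import ThreadPoolExecutor


def recognize_reading(image_id: int) -> float:
    """Stand-in for one recognition request; returns its latency in seconds.

    In the real scenario this would POST a meter image to the server and
    time the round trip; here a fixed sleep models inference time.
    """
    start = time.perf_counter()
    time.sleep(0.05)  # simulated inference latency (50 ms)
    return time.perf_counter() - start


def stress_test(n_requests: int, n_concurrent: int) -> dict:
    """Issue n_requests with n_concurrent workers and summarize latency."""
    with ThreadPoolExecutor(max_workers=n_concurrent) as pool:
        latencies = sorted(pool.map(recognize_reading, range(n_requests)))
    return {
        "mean_s": statistics.mean(latencies),
        "p95_s": latencies[int(0.95 * (len(latencies) - 1))],
    }


if __name__ == "__main__":
    # Simulate a burst of simultaneous consumers hitting the server.
    print(stress_test(n_requests=40, n_concurrent=8))
```

Reporting a high percentile (here p95) alongside the mean matters for this kind of evaluation, since a public-facing chatbot is judged by its worst typical response, not its average.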

Published
18/10/2021
ROCHA, Carlos V. M. et al. A Comparative Study of Methods based on Deep Neural Networks for Self-reading of Energy Consumption in a Chatbot Application Context. In: WORKSHOP DE APLICAÇÕES INDUSTRIAIS - CONFERENCE ON GRAPHICS, PATTERNS AND IMAGES (SIBGRAPI), 34., 2021, Online. Anais [...]. Porto Alegre: Sociedade Brasileira de Computação, 2021. p. 233-239. DOI: https://doi.org/10.5753/sibgrapi.est.2021.20045.
