FiberNet: A Compact and Efficient Convolutional Neural Network Model for Image Classification

  • Verner Rafael Ferreira, Universidade Federal do Rio Grande do Norte
  • Anne Magaly de Paula Canuto, Universidade Federal do Rio Grande do Norte

Abstract

Creating a convolutional neural network (CNN) with a minimal number of trainable parameters offers benefits across diverse application domains. This paper presents FiberNet, an efficient CNN model with few trainable parameters that delivers both high accuracy and fast inference. In our empirical evaluation, FiberNet achieved 96.25% accuracy on the sisal dataset and 74.90% on the CIFAR-10 dataset with a total of 754,345 trainable parameters.
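The parameter total reported above is the sum, over all layers, of each layer's weights plus biases. As a minimal illustration of how such counts arise (using a hypothetical toy network, not the FiberNet architecture, which is not detailed here), a standard convolutional layer contributes k·k·C_in·C_out weights plus C_out biases:

```python
def conv2d_params(in_ch: int, out_ch: int, k: int, bias: bool = True) -> int:
    """Trainable parameters in a standard 2D convolution layer."""
    return k * k * in_ch * out_ch + (out_ch if bias else 0)

def dense_params(in_feat: int, out_feat: int, bias: bool = True) -> int:
    """Trainable parameters in a fully connected layer."""
    return in_feat * out_feat + (out_feat if bias else 0)

# Hypothetical toy CNN for illustration only (not FiberNet):
# two 3x3 conv layers followed by a 10-class classifier head.
total = (conv2d_params(3, 32, 3)      # 3*3*3*32 + 32  = 896
         + conv2d_params(32, 64, 3)   # 3*3*32*64 + 64 = 18496
         + dense_params(64, 10))      # 64*10 + 10     = 650
print(total)  # 20042
```

Reducing kernel size, channel width, or replacing standard convolutions with factorized variants (as in the MobileNet family cited below) shrinks exactly these per-layer terms.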

References

Agrawal, A. and Mittal, N. (2020). “Using CNN for facial expression recognition: a study of the effects of kernel size and number of filters on accuracy”. The Visual Computer, v. 36, no. 2, p. 405-412.

Chen, Y. et al. (2019). “Drop an octave: Reducing spatial redundancy in convolutional neural networks with octave convolution”. In: Proceedings of the IEEE/CVF international conference on computer vision. p. 3435-3444.

Dumoulin, V. and Visin, F. (2016). “A guide to convolution arithmetic for deep learning”. arXiv preprint arXiv:1603.07285.

Friedman, M. (1937). “The use of ranks to avoid the assumption of normality implicit in the analysis of variance”. Journal of the American Statistical Association, v. 32, no. 200, p. 675-701.

Graham, B. (2014). “Fractional max-pooling”. arXiv preprint arXiv:1412.6071.

Geifman, A. (2020). “The Correct Way to Measure Inference Time of Deep Neural Networks”. Available at: [link] (Accessed on: March 22, 2021).

Gholamalinezhad, H. and Khosravi, H. (2020). “Pooling Methods in Deep Neural Networks, a Review”. arXiv preprint arXiv:2009.07485.

Hinton, G. E., Srivastava, N., Krizhevsky, A., Sutskever, I. and Salakhutdinov, R. R. (2012). “Improving neural networks by preventing co-adaptation of feature detectors”. arXiv preprint arXiv:1207.0580.

Howard, A. et al. (2017). “MobileNets: Efficient convolutional neural networks for mobile vision applications”. arXiv preprint arXiv:1704.04861.

Howard, A. et al. (2019). “Searching for MobileNetV3”. In: Proceedings of the IEEE/CVF international conference on computer vision. p. 1314-1324.

Huang, G., Liu, Z., Maaten, L.V.D. and Weinberger, K.Q. (2017). “Densely connected convolutional networks”. In: Proceedings of the IEEE conference on computer vision and pattern recognition. p. 4700-4708.

Huang, G. et al. (2018). “CondenseNet: An efficient DenseNet using learned group convolutions”. In: Proceedings of the IEEE conference on computer vision and pattern recognition. p. 2752-2761.

Iandola, F. N. et al. (2016). “SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <0.5 MB model size”. arXiv preprint arXiv:1602.07360.

He, K. et al. (2015). “Spatial pyramid pooling in deep convolutional networks for visual recognition”. IEEE transactions on pattern analysis and machine intelligence, v. 37, no. 9, p. 1904-1916.

He, K. et al. (2016). “Deep residual learning for image recognition”. In: Proceedings of the IEEE conference on computer vision and pattern recognition. p. 770-778.

Krizhevsky, A., Sutskever, I. and Hinton, G. E. (2012). “ImageNet classification with deep convolutional neural networks”. Advances in neural information processing systems, v. 25, p. 1097-1105.

Ma, N., Zhang, X., Zheng, H. and Sun, J. (2018). “ShuffleNet V2: Practical guidelines for efficient CNN architecture design”. In: Proceedings of the European conference on computer vision (ECCV). p. 116-131.

Nemenyi, P. B. (1963). “Distribution-free multiple comparisons”. Princeton University.

Pearson, K. (1904). “On the theory of contingency and its relation to association and normal correlation”. Drapers' Company Research Memoirs. Biometric series I: Dulau and Co.

Sandler, M., Howard, A., Zhu, M., Zhmoginov, A. and Chen, L. (2018). “Mobilenetv2: Inverted residuals and linear bottlenecks”. In: Proceedings of the IEEE conference on computer vision and pattern recognition. p. 4510-4520.

Hochreiter, S. (1998). “The vanishing gradient problem during learning recurrent neural nets and problem solutions”. International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems, v. 6, no. 2, p. 107-116.

Simonyan, K. and Zisserman, A. (2014). “Very deep convolutional networks for large-scale image recognition”. arXiv preprint arXiv:1409.1556.

Szegedy, C. et al. (2015). “Going deeper with convolutions”. In: Proceedings of the IEEE conference on computer vision and pattern recognition. p. 1-9.

Tan, M. and Le, Q. (2019). “EfficientNet: Rethinking model scaling for convolutional neural networks”. In: International Conference on Machine Learning. PMLR, p. 6105-6114.

Tan, M. et al. (2019). “MnasNet: Platform-aware neural architecture search for mobile”. In: Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. p. 2820-2828.

Tan, M. and Le, Q. (2021). “EfficientNetV2: Smaller models and faster training”. In: International Conference on Machine Learning. PMLR, p. 10096-10106.

Teich, D. A. and Teich, P. R. (2018). “PLASTER: A Framework for Deep Learning Performance”. Tech. rep., TIRIAS Research.

Yu, F. and Koltun, V. (2015). “Multi-scale context aggregation by dilated convolutions”. arXiv preprint arXiv:1511.07122.

Yu, F., Koltun, V. and Funkhouser, T. (2017). “Dilated residual networks”. In: Proceedings of the IEEE conference on computer vision and pattern recognition. p. 472-480.

Zagoruyko, S. and Komodakis, N. (2016). “Wide residual networks”. arXiv preprint arXiv:1605.07146.

Zeiler, M. D., Taylor, G. W. and Fergus, R. (2011). “Adaptive deconvolutional networks for mid and high-level feature learning”. In: 2011 international conference on computer vision. IEEE, p. 2018-2025.

Zeiler, M. D. and Fergus, R. (2014). “Visualizing and understanding convolutional networks”. In: Computer Vision – ECCV 2014: 13th European Conference, Zurich, Switzerland, September 6-12, 2014, Proceedings, Part I. Springer International Publishing, p. 818-833.
Published
25/09/2023
How to Cite

FERREIRA, Verner Rafael; CANUTO, Anne Magaly de Paula. FiberNet: A Compact and Efficient Convolutional Neural Network Model for Image Classification. In: ENCONTRO NACIONAL DE INTELIGÊNCIA ARTIFICIAL E COMPUTACIONAL (ENIAC), 20. , 2023, Belo Horizonte/MG. Anais [...]. Porto Alegre: Sociedade Brasileira de Computação, 2023 . p. 257-271. ISSN 2763-9061. DOI: https://doi.org/10.5753/eniac.2023.233990.