A Study on the Acceptability of Approximate System Results in Artificial Neural Networks

  • Guilherme Saides Serbai UTFPR
  • Rogério Aparecido Gonçalves UTFPR
  • João Fabrício Filho UTFPR

Abstract


Approximate computing is a field of Computer Science that improves performance and energy efficiency at the expense of a controlled reduction in precision. In the context of training and validating classification neural networks, investigating the impact of approximation on image quality is essential to determine how much data degradation can be tolerated without compromising the validity of the results. This work examines the acceptability and quality of results under different levels of image approximation. We employ a residual neural network (ResNet-50) [He et al. 2016] and evaluate it across training and validation scenarios that combine approximated and non-approximated images from the Imagenette2 dataset [FastAI 2019]. Our objective is to investigate data acceptability thresholds and their relationship with the network's prediction quality. The results demonstrate a correlation between acceptability and accuracy when neural networks are trained and validated with approximated images. ResNet-50 achieved accuracies ranging from 13.7% or higher, in scenarios that diverge strongly from the training conditions, to 68.6% or higher, in conditions similar or identical to training, showing that approximate computing can be viable when data similarity is maintained - a crucial factor for energy-efficient, high-performance systems.
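As a concrete illustration of this kind of experiment, the sketch below (not the authors' code) evaluates a ResNet-50 on Imagenette2 images degraded by a simple pixel-quantization transform, used here as a hypothetical stand-in for an approximate system, since the abstract does not specify the approximation mechanism. It assumes PyTorch/torchvision and a local copy of the dataset; in the study itself, the model weights would come from the training scenarios described above (omitted here).

    # Hypothetical sketch (not the authors' code): evaluating a ResNet-50 on
    # Imagenette2 images degraded by a simple pixel-quantization transform.
    import torch
    from torch import nn
    from torch.utils.data import DataLoader
    from torchvision import datasets, models, transforms

    def quantize(img, bits=4):
        # Quantize each channel to 2**bits levels: a stand-in for the
        # controlled precision loss introduced by an approximate system.
        levels = 2 ** bits
        return torch.floor(img * (levels - 1)) / (levels - 1)

    preprocess = transforms.Compose([
        transforms.Resize(256),
        transforms.CenterCrop(224),
        transforms.ToTensor(),
        transforms.Lambda(lambda t: quantize(t, bits=4)),  # approximation step
        transforms.Normalize(mean=[0.485, 0.456, 0.406],
                             std=[0.229, 0.224, 0.225]),
    ])

    # Imagenette2 ships in ImageFolder layout (train/ and val/ subdirectories).
    val_set = datasets.ImageFolder("imagenette2/val", transform=preprocess)
    val_loader = DataLoader(val_set, batch_size=64, shuffle=False)

    # ResNet-50 with its classifier resized to Imagenette's 10 classes; the
    # weights would come from the training scenarios of the study (not shown).
    model = models.resnet50(weights=None)
    model.fc = nn.Linear(model.fc.in_features, len(val_set.classes))
    model.eval()

    correct = total = 0
    with torch.no_grad():
        for images, labels in val_loader:
            preds = model(images).argmax(dim=1)
            correct += (preds == labels).sum().item()
            total += labels.size(0)
    print(f"Top-1 accuracy on approximated images: {correct / total:.3f}")

Varying the bits parameter of the quantization step is one simple way to sweep approximation levels and relate the resulting image degradation to the accuracy thresholds discussed in the abstract.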

References

FastAI (2019). Imagenette2 dataset.

Felzmann, I., Fabrício Filho, J., de Oliveira, J. R., and Wanner, L. (2021). Special session: How much quality is enough quality? A case for acceptability in approximate designs. In 2021 IEEE 39th International Conference on Computer Design (ICCD), pages 5–8, Los Alamitos, CA, USA. IEEE Computer Society.

Felzmann, I., Fabrício Filho, J., and Wanner, L. (2020). Risk-5: Controlled approximations for RISC-V. IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, 39(11):4052–4063.

He, K., Zhang, X., Ren, S., and Sun, J. (2016). Deep residual learning for image recognition. In 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 770–778.

Li, T., Li, S., and Gupta, P. (2023). Training neural networks for execution on approximate hardware.

Kugler, L. (2015). Is "good enough" computing good enough? The energy-accuracy trade-off in approximate computing. Communications of the ACM, 58(5):12.

Peng, Z., Chen, X., Xu, C., Jing, N., Liang, X., Lu, C., and Jiang, L. (2018). AxNet: Approximate computing using an end-to-end trainable neural network.

Zhang, Q., Wang, T., Tian, Y., Yuan, F., and Xu, Q. (2015). ApproxANN: An approximate computing framework for artificial neural network. In 2015 Design, Automation & Test in Europe Conference & Exhibition (DATE), pages 701–706.
Published: 2025-05-28

SERBAI, Guilherme Saides; GONÇALVES, Rogério Aparecido; FABRÍCIO FILHO, João. A Study on the Acceptability of Approximate System Results in Artificial Neural Networks. In: REGIONAL SCHOOL OF HIGH PERFORMANCE COMPUTING FROM SÃO PAULO (ERAD-SP), 16., 2025, São José do Rio Preto/SP. Proceedings [...]. Porto Alegre: Sociedade Brasileira de Computação, 2025. p. 50-53. DOI: https://doi.org/10.5753/eradsp.2025.9749.
