Universal Approximation Theorem for Tessarine-Valued Neural Networks

  • Rafael A. F. Carniello UNICAMP
  • Wington L. Vital UNICAMP
  • Marcos Eduardo Valle UNICAMP

Abstract


The universal approximation theorem ensures that any continuous real-valued function defined on a compact subset can be approximated with arbitrary precision by a single hidden layer neural network. In this paper, we show that the universal approximation theorem also holds for tessarine-valued neural networks. Precisely, any continuous tessarine-valued function can be approximated with arbitrary precision by a single hidden layer tessarine-valued neural network with split activation functions in the hidden layer. The paper also presents a simple numerical example that confirms the theoretical result and shows the superior performance of a tessarine-valued neural network over a real-valued model for interpolating a vector-valued function.
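To make the objects in the abstract concrete, the sketch below implements tessarine arithmetic and the forward pass of a single hidden layer tessarine-valued network with a split activation. This is a minimal illustration, not the authors' implementation: the function names (`tess_mul`, `split_tanh`, `forward`), the choice of `tanh` as the component-wise activation, and the weight shapes are assumptions for the example. A tessarine has basis (1, i, j, k) with i² = −1, j² = +1, k = ij, and a commutative product; a split activation applies a real activation to each of the four components independently.

```python
import numpy as np

def tess_mul(x, y):
    """Commutative tessarine product of arrays of shape (..., 4)
    in the basis (1, i, j, k), where i**2 = -1, j**2 = +1, k = ij."""
    a, b, c, d = x[..., 0], x[..., 1], x[..., 2], x[..., 3]
    e, f, g, h = y[..., 0], y[..., 1], y[..., 2], y[..., 3]
    return np.stack([
        a*e - b*f + c*g - d*h,   # 1 component
        a*f + b*e + c*h + d*g,   # i component
        a*g - b*h + c*e - d*f,   # j component
        a*h + b*g + c*f + d*e,   # k component
    ], axis=-1)

def split_tanh(t):
    """Split activation: the real tanh applied to each of the
    four tessarine components separately."""
    return np.tanh(t)

def forward(x, W1, b1, W2, b2):
    """Single hidden layer tessarine network:
    tessarine-linear layer -> split activation -> tessarine-linear layer.
    Shapes: x (n_in, 4); W1 (n_hid, n_in, 4); b1 (n_hid, 4);
    W2 (n_out, n_hid, 4); b2 (n_out, 4)."""
    h = split_tanh(tess_mul(W1, x[None, :, :]).sum(axis=1) + b1)
    return tess_mul(W2, h[None, :, :]).sum(axis=1) + b2
```

Note that, unlike quaternions, the tessarine product is commutative (`tess_mul(x, y) == tess_mul(y, x)`), which is the algebraic feature that distinguishes the networks studied here from quaternion-valued models.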

Published
29/11/2021
How to Cite

CARNIELLO, Rafael A. F.; VITAL, Wington L.; VALLE, Marcos Eduardo. Universal Approximation Theorem for Tessarine-Valued Neural Networks. In: ENCONTRO NACIONAL DE INTELIGÊNCIA ARTIFICIAL E COMPUTACIONAL (ENIAC), 18., 2021, Evento Online. Anais [...]. Porto Alegre: Sociedade Brasileira de Computação, 2021. p. 233-243. ISSN 2763-9061. DOI: https://doi.org/10.5753/eniac.2021.18256.