Optimizing Early Exits in Deep Neural Networks: How to Handle Buffers?

Abstract


Early-Exit Deep Neural Networks (EE-DNNs) insert side branches that allow inference to terminate locally whenever the prediction confidence exceeds a predefined threshold, reducing dependence on the cloud. A fixed threshold, however, does not adapt to the contextual variations found in real deployments. This work investigates dynamic threshold adaptation using multi-armed bandit (MAB) algorithms. In addition, a finite input buffer is introduced to balance the trade-off between accuracy and latency, taking both the prediction confidence and the queue length into account. Experimental results show that MAB-tuned thresholds converge quickly across diverse contexts, while the buffer ensures an efficient balance between accuracy and latency.

Keywords: Deep Neural Networks, Reinforcement Learning, Edge Computing
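
The abstract describes the mechanism only at a high level. As a minimal sketch, the Python fragment below illustrates one plausible way to combine a UCB1-style multi-armed bandit over a discrete set of candidate confidence thresholds with a buffer-aware early-exit rule. The threshold grid, the occupancy heuristic, and the reward shaping are illustrative assumptions, not the authors' implementation.

    import math

    THRESHOLDS = [0.5, 0.6, 0.7, 0.8, 0.9]   # candidate exit thresholds (bandit arms); illustrative grid
    counts = [0] * len(THRESHOLDS)           # number of times each threshold was tried
    reward_sums = [0.0] * len(THRESHOLDS)    # cumulative reward observed per threshold

    def select_threshold(t):
        """UCB1: try every arm once, then pick the arm maximising the
        empirical mean reward plus an exploration bonus.
        t is the 1-based round index (total decisions made so far)."""
        for i, n in enumerate(counts):
            if n == 0:
                return i
        return max(
            range(len(THRESHOLDS)),
            key=lambda i: reward_sums[i] / counts[i]
            + math.sqrt(2.0 * math.log(t) / counts[i]),
        )

    def update(i, reward):
        """Feed back the reward observed after using threshold i."""
        counts[i] += 1
        reward_sums[i] += reward

    def should_exit_locally(confidence, threshold, queue_len, buffer_size):
        """Illustrative buffer-aware rule (an assumption, not the paper's exact
        policy): exit at the side branch when confidence is high enough, and
        relax the requirement as the finite buffer fills up, trading a little
        accuracy for lower queueing latency."""
        occupancy = queue_len / buffer_size
        return confidence >= threshold * (1.0 - 0.5 * occupancy)

In a usage loop, each round t would call select_threshold(t), run the side branch, decide with should_exit_locally whether to stop locally or offload, and report back a reward such as an accuracy gain minus a latency penalty via update. How that reward weighs accuracy against latency is precisely the design choice the MAB formulation is meant to tune.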

Published
19/05/2025
PACHECO, Roberto G.; BAJPAI, Divya J.; SHIFRIN, Mark; COUTO, Rodrigo S.; MENASCHÉ, Daniel Sadoc; HANAWAL, Manjesh K.; CAMPISTA, Miguel Elias M. Otimizando Saídas Antecipadas em Redes Neurais Profundas: Como Lidar com Buffers?. In: SIMPÓSIO BRASILEIRO DE REDES DE COMPUTADORES E SISTEMAS DISTRIBUÍDOS (SBRC), 43., 2025, Natal/RN. Anais [...]. Porto Alegre: Sociedade Brasileira de Computação, 2025. p. 560-573. ISSN 2177-9384. DOI: https://doi.org/10.5753/sbrc.2025.6319.
