The Impact of Activation Patterns in the Explainability of Large Language Models – A Survey of recent advances

  • Mateus R. Figênio (UTFPR)
  • André Santanché (UNICAMP)
  • Luiz Gomes-Jr (UTFPR)

Abstract

Large Language Models (LLMs) now dominate the performance benchmarks of Natural Language Processing (NLP) tasks, with capabilities that outshine most previous approaches to language modeling. Yet, despite their success in these tasks and their increasingly broad and pervasive use in everyday and specialized fields of application, little is known about how or why they reach the outputs they do. This study reviews the development of Language Models (LMs) and the advances in approaches to their explainability, focusing on methods that interpret and explain the neural network component of LMs (especially Transformer models) as a means of better understanding them.

Published
10/04/2024
FIGÊNIO, Mateus R.; SANTANCHÉ, André; GOMES-JR, Luiz. The Impact of Activation Patterns in the Explainability of Large Language Models – A Survey of recent advances. In: ESCOLA REGIONAL DE BANCO DE DADOS (ERBD), 19., 2024, Farroupilha/RS. Anais [...]. Porto Alegre: Sociedade Brasileira de Computação, 2024. p. 141-149. ISSN 2595-413X. DOI: https://doi.org/10.5753/erbd.2024.238868.