A Study on the Use of Open Language Models for the Next-Item Recommendation Task
Abstract
Large Language Models (LLMs) have been used in recommender systems to improve the user experience and reduce information overload. With the popularity of generative AI, this approach has been growing and showing promising results. Open LLMs are of particular interest due to their accessibility and potential for fine-tuning. We investigate the effectiveness of open LLMs in sequential recommendation, using a method from the literature to recommend new items, with and without fine-tuning. We conclude that open-source LLMs can outperform proprietary ones, even with fewer parameters, and that fine-tuning improves model performance, depending on hyperparameter exploration and data quality.
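To make the setup concrete, below is a minimal, illustrative sketch of the prompting approach, in the zero-shot next-item style of Wang and Lim (2023): the user's interaction history and a candidate list are serialized into a prompt, and an open LLM is asked to rank the candidates. The model name, prompt wording, and generation settings are assumptions for illustration, not the paper's exact configuration.

```python
# Illustrative sketch only: prompting an open LLM for next-item
# recommendation in the zero-shot style of Wang and Lim (2023).
# Model choice, prompt wording, and generation settings are assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "mistralai/Mistral-7B-Instruct-v0.2"  # assumed open model (Jiang et al. 2023)

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")

def recommend_next_item(watch_history, candidates, top_k=10):
    """Builds a next-item prompt from a user's history and a candidate list."""
    prompt = (
        "The user has watched the following movies, in order:\n"
        + "\n".join(f"{i+1}. {title}" for i, title in enumerate(watch_history))
        + "\n\nFrom the candidate list below, recommend the "
        f"{top_k} movies the user is most likely to watch next, ranked:\n"
        + "\n".join(f"- {title}" for title in candidates)
    )
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=256, do_sample=False)
    # Strip the echoed prompt and return only the generated ranking.
    return tokenizer.decode(output[0][inputs["input_ids"].shape[1]:],
                            skip_special_tokens=True)

# Example with MovieLens-style titles (Harper and Konstan 2015):
print(recommend_next_item(
    ["Toy Story (1995)", "Jumanji (1995)", "Heat (1995)"],
    ["GoldenEye (1995)", "Sabrina (1995)", "Casino (1995)"],
    top_k=3,
))
```

For the fine-tuned variant, a common parameter-efficient route is LoRA (Hu et al. 2022) via the peft library; the sketch below attaches low-rank adapters to the model loaded above. The rank, scaling factor, and target modules shown are assumed values, not the hyperparameters explored in the paper.

```python
# Illustrative sketch only: LoRA adapters (Hu et al. 2022) for
# parameter-efficient fine-tuning; reuses `model` from the sketch above.
# Rank, alpha, and target modules are assumed values.
from peft import LoraConfig, TaskType, get_peft_model

lora_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=8,                      # adapter rank (assumed)
    lora_alpha=16,            # scaling factor (assumed)
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # attention projections (assumed)
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the low-rank adapters are trained
```

Because only the adapter weights are updated during training, this kind of setup is what makes fine-tuning 7B-scale open models feasible on modest hardware.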
Keywords:
Recommender Systems, Next-Item Recommendation, Language Models, Prompt Engineering, Fine-tuning
References
Almazrouei, E. et al. (2023). The falcon series of open language models. arXiv, cs.CL, 2311.16867.
Bao, K. et al. (2023). TALLRec: An Effective and Efficient Tuning Framework to Align Large Language Model with Recommendation. In Proc. of the 17th ACM Conf. on Recommender Systems (RecSys), p. 1007–1014.
Brown, T. B. et al. (2020). Language models are few-shot learners. In Proc. of the 34th Intl. Conf. on Neural Information Processing Systems (NeurIPS), p. 1877–1901.
Dai, S. et al. (2023). Uncovering ChatGPT’s Capabilities in Recommender Systems. In Proc. of the 17th ACM Conf. on Recommender Systems (RecSys), p. 1126–1132.
Fan, W. et al. (2023). Recommender Systems in the Era of Large Language Models (LLMs). arXiv, cs.IR, 2307.02046.
Harper, F. M. and Konstan, J. A. (2015). The MovieLens Datasets: History and Context. ACM Trans. Interact. Intell. Syst., vol. 5, n. 4, p. 1–19.
Hou, Y. et al. (2024a). Bridging Language and Items for Retrieval and Recommendation. arXiv, cs.IR, 2403.03952.
Hou, Y. et al. (2024b). Large Language Models are Zero-Shot Rankers for Recommender Systems. In Proc. of the 46th European Conf. on Information Retrieval (ECIR), p. 364–381.
Houlsby, N. et al. (2019). Parameter-Efficient Transfer Learning for NLP. In Proc. of the 36th Intl. Conf. on Machine Learning (ICML), p. 2790–2799.
Hu, E. J. et al. (2022). LoRA: Low-Rank Adaptation of Large Language Models. In Proc. of the 10th Intl. Conf. on Learning Representations (ICLR), p. 1–13.
Jiang, A. Q. et al. (2023). Mistral 7B. arXiv, cs.CL, 2310.06825.
Liu, J. et al. (2023). Is ChatGPT a Good Recommender? A Preliminary Study. arXiv, cs.IR, 2304.10149.
Liu, Q. et al. (2024). ONCE: Boosting Content-based Recommendation with Both Open- and Closed-source Large Language Models. In Proc. of the 17th ACM Intl. Conf. on Web Search and Data Mining (WSDM), p. 452–461.
Lyu, H. et al. (2023). LLM-Rec: Personalized Recommendation via Prompting Large Language Models. arXiv, cs.CL, 2307.15780.
Rajput, S. et al. (2023). Recommender Systems with Generative Retrieval. In Proc. of the 37th Conf. on Neural Information Processing Systems (NeurIPS), p. 1–17.
Sanner, S. et al. (2023). Large Language Models are Competitive Near Cold-start Recommenders for Language- and Item-based Preferences. In Proc. of the 17th ACM Conf. on Recommender Systems (RecSys), p. 890–896.
Shao, B., Li, X., and Bian, G. (2021). A survey of research hotspots and frontier trends of recommendation systems from the perspective of knowledge graph. Expert Systems with Applications, 165, p. 113764.
Touvron, H. et al. (2023). Llama 2: Open Foundation and Fine-Tuned Chat Models. arXiv, cs.CL, 2307.09288.
Wang, L. and Lim, E.-P. (2023). Zero-Shot Next-Item Recommendation using Large Pretrained Language Models. arXiv, cs.IR, 2304.03153.
Wu, F. et al. (2020). MIND: A large-scale dataset for news recommendation. In Proc. of the 58th Annual Meeting of the Association for Computational Linguistics, p. 3597–3606.
Xu, S. et al. (2023). OpenP5: An Open-Source Platform for Developing, Training, and Evaluating LLM-based Recommender Systems. arXiv, cs.IR, 2306.11134.
Zhang, J. et al. (2023). Recommendation as Instruction Following: A Large Language Model Empowered Recommendation Approach. arXiv, cs.IR, 2305.07001.
Published
October 14, 2024
How to Cite
LIMA, Marcos Avner Pimenta de; SILVA, Eduardo Alves da; DA SILVA, Altigran Soares. Um Estudo sobre o uso de Modelos de Linguagem Abertos na Tarefa de Recomendação de Próximo Item. In: SIMPÓSIO BRASILEIRO DE BANCO DE DADOS (SBBD), 39., 2024, Florianópolis/SC. Anais [...]. Porto Alegre: Sociedade Brasileira de Computação, 2024. p. 510-522. ISSN 2763-8979. DOI: https://doi.org/10.5753/sbbd.2024.240865.