A study about the use of open Language Models in the Next-Item Recommendation task.

Abstract


Large Language Models (LLMs) have been employed in recommender systems to enhance the user experience and reduce information overload. With the rise of generative AI, this approach has gained attention and demonstrated promising results. Open LLMs are of particular interest due to their accessibility and potential for fine-tuning. We investigate the effectiveness of open LLMs for recommendation by applying a method from the literature to recommend new items, both with and without fine-tuning. We conclude that open LLMs can outperform closed ones, even with fewer parameters, and that fine-tuning further enhances model performance, depending on hyperparameter exploration and data quality.
Keywords: Recommender Systems; Next-item recommendation; Language Models; Prompt Engineering; Fine-tuning
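
To make the prompt-based setup the abstract refers to concrete, the sketch below illustrates zero-shot next-item recommendation with an open LLM: the user's interaction history and a candidate pool are serialized into a ranking prompt. This is a minimal illustration under stated assumptions, not the paper's exact method; the model name, prompt wording, and toy item lists are all hypothetical.

# Minimal sketch of zero-shot next-item recommendation with an open LLM.
# Assumptions: Hugging Face transformers is installed and the model below
# is available; this is illustrative, not the authors' exact setup.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="mistralai/Mistral-7B-Instruct-v0.2",  # any open instruction-tuned model could be substituted
)

# Toy interaction history and candidate pool (hypothetical data).
watched = ["The Matrix", "Blade Runner", "Inception"]
candidates = ["Interstellar", "Titanic", "Ex Machina", "The Notebook"]

# Serialize the sequential history and the candidates into a ranking prompt.
prompt = (
    f"A user has watched, in order: {'; '.join(watched)}. "
    f"Rank the following candidate movies by how likely the user is to "
    f"watch them next, most likely first: {'; '.join(candidates)}. "
    "Answer with a numbered list of titles only."
)

# Greedy decoding keeps the produced ranking deterministic for evaluation.
result = generator(prompt, max_new_tokens=80, do_sample=False, return_full_text=False)
print(result[0]["generated_text"])

In the fine-tuned variant the abstract mentions, prompt/response pairs of this form would typically serve as supervised examples, commonly trained with a parameter-efficient method such as LoRA.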

Published
2024-10-14
LIMA, Marcos Avner Pimenta de; SILVA, Eduardo Alves da; DA SILVA, Altigran Soares. A study about the use of open Language Models in the Next-Item Recommendation task. In: BRAZILIAN SYMPOSIUM ON DATABASES (SBBD), 39., 2024, Florianópolis/SC. Anais [...]. Porto Alegre: Sociedade Brasileira de Computação, 2024. p. 510-522. ISSN 2763-8979. DOI: https://doi.org/10.5753/sbbd.2024.240865.