Fairness in Recommender Systems: An Analysis of Regularization Techniques
Abstract
Machine learning techniques are increasingly used in decision-making processes across many domains, driven by the abundance of available data. Although such applications were expected to eliminate human bias, some models have been shown to behave unfairly toward historically discriminated groups by reflecting biases present in the datasets on which they are trained. This problem has attracted considerable academic interest in recent years, and several definitions, metrics, and methodologies have been proposed to measure and ensure fairness in these contexts. One particular area is Recommender Systems, whose objective is to recommend relevant items to users; in some contexts, it is undesirable for these recommendations to be associated with the users' protected attributes. This problem can be characterized as group fairness, in which groups of users must be treated equally by the Recommender System. In this work, we analyze the effectiveness of group fairness regularization in a movie recommender system for men and women, using two proposed metrics inspired by group fairness in classification. Empirical results show that this strategy improves group fairness metrics while having little impact on the final quality of the recommendations.
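To make the idea concrete, the sketch below illustrates one way a group-fairness regularizer can be attached to a matrix-factorization recommender. This is a minimal illustration under assumptions, not the paper's exact formulation: the synthetic data, the weight lam, and the parity penalty (squared difference between the two groups' mean predicted ratings) are all choices made for the example.

    import torch

    # Synthetic setup: 1000 (user, item, rating) interactions and a
    # binary protected attribute per user (assumed for illustration).
    n_users, n_items, k = 100, 50, 8
    torch.manual_seed(0)
    users = torch.randint(0, n_users, (1000,))
    items = torch.randint(0, n_items, (1000,))
    ratings = 1 + 4 * torch.rand(1000)        # ratings in [1, 5]
    group = torch.randint(0, 2, (n_users,))   # protected attribute per user

    # Matrix-factorization parameters: user and item latent factors.
    P = torch.nn.Parameter(0.1 * torch.randn(n_users, k))
    Q = torch.nn.Parameter(0.1 * torch.randn(n_items, k))
    opt = torch.optim.Adam([P, Q], lr=0.05)
    lam = 0.5  # fairness regularization weight (illustrative value)

    for epoch in range(200):
        opt.zero_grad()
        pred = (P[users] * Q[items]).sum(dim=1)      # predicted ratings
        mse = torch.mean((pred - ratings) ** 2)      # recommendation loss
        g = group[users]                             # group of each interaction
        # Group-parity penalty: push the mean predicted rating of the
        # two groups together (assumes both groups appear in the batch).
        parity = (pred[g == 0].mean() - pred[g == 1].mean()) ** 2
        loss = mse + lam * parity
        loss.backward()
        opt.step()

Increasing lam trades recommendation accuracy (the MSE term) for smaller between-group differences, which is the trade-off the empirical analysis above examines.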
