Development of an Equity Strategy for Recommendation Systems
Abstract
As highly data-driven applications, recommender systems can be affected by distortions in their data, leading to unfair results for different groups of users and, in turn, degraded system performance. It is therefore important to identify and mitigate unfairness in recommendation scenarios. To that end, we developed an equity algorithm aimed at reducing group unfairness in recommender systems. The algorithm was tested on two existing datasets (MovieLens and Songs) with two user clustering strategies, and it reduced group unfairness on both datasets under both clustering strategies.
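The abstract does not spell out the algorithm or its exact metric, so the sketch below is only an illustration of the kind of group-unfairness measure such a strategy typically targets: the gap in mean prediction error between two user clusters. The metric definition, function names, and toy data are all assumptions for illustration, not the paper's actual method.

```python
import numpy as np

# Minimal sketch (an assumption, not the paper's method): measure group
# unfairness as the gap in mean absolute prediction error between two
# user clusters. A fairness-aware recommender would try to shrink this gap.

def mae(ratings, preds, users):
    """Mean absolute error restricted to the given user rows."""
    return np.abs(ratings[users] - preds[users]).mean()

def group_unfairness(ratings, preds, cluster_a, cluster_b):
    """Absolute error gap between two user clusters; 0.0 means parity."""
    return abs(mae(ratings, preds, cluster_a) - mae(ratings, preds, cluster_b))

# Toy example: 4 users x 3 items; users 0-1 form cluster A, users 2-3 cluster B.
ratings = np.array([[5., 3., 4.],
                    [4., 2., 5.],
                    [1., 5., 2.],
                    [2., 4., 1.]])
preds = np.array([[4.8, 3.1, 4.2],
                  [4.1, 2.3, 4.7],
                  [2.0, 4.1, 3.0],
                  [3.1, 3.0, 2.2]])
print(group_unfairness(ratings, preds, cluster_a=[0, 1], cluster_b=[2, 3]))
```

In a real pipeline the clusters would come from the two user clustering strategies the abstract mentions, and the training objective of the recommender would penalize this gap alongside accuracy.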