AHP-Gaussian To Enhance Model Selection Based On Multiple Fairness Criteria
Abstract
The challenge of developing impartial models that minimize the propagation of unfair predictions is directly linked to optimizing multiple fairness concepts. Therefore, identifying which model best combines these concepts is essential for promoting fairness in machine learning. The field of Multi-Criteria Decision Analysis addresses similar issues by developing techniques for choosing the best alternative in complex problems. One standout method is AHP–Gaussian, which uses the Gaussian factor to define the relevance of each criterion in the decision. This eliminates the human factor in weighting the criteria's importance, making it an excellent alternative for the fairness-aware model selection task. To the best of our knowledge, no study in the literature has proposed this approach before. This paper addresses this gap and proposes applying AHP–Gaussian to select fairer models in classification tasks involving people. According to the results, AHP–Gaussian is more effective than traditional multi-criteria methods at selecting classifiers that balance predictive power with the maximization of distinct fairness concepts.
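To make the weighting mechanism concrete, the following is a minimal sketch of the AHP–Gaussian scheme as commonly described in the multi-criteria literature: each criterion's weight is its Gaussian factor (the coefficient of variation of the normalized column), so criteria that discriminate more between alternatives receive more weight, with no human pairwise judgments. The example decision matrix of three candidate classifiers and three criteria is purely illustrative, not data from this paper.

```python
import numpy as np

def ahp_gaussian_weights(decision_matrix):
    """Derive criterion weights from the Gaussian factor (coefficient of
    variation), with no human pairwise comparisons.

    decision_matrix: rows are alternatives, columns are criteria;
    all criteria are assumed benefit-type (higher is better).
    """
    X = np.asarray(decision_matrix, dtype=float)
    # Step 1: normalize each criterion column so it sums to 1.
    norm = X / X.sum(axis=0)
    # Step 2: Gaussian factor = sample standard deviation / mean, per column.
    gaussian_factor = norm.std(axis=0, ddof=1) / norm.mean(axis=0)
    # Step 3: rescale the factors so the weights sum to 1.
    return gaussian_factor / gaussian_factor.sum()

def rank_alternatives(decision_matrix):
    """Score each alternative as the weighted sum of its normalized criteria."""
    X = np.asarray(decision_matrix, dtype=float)
    norm = X / X.sum(axis=0)
    return norm @ ahp_gaussian_weights(X)

# Illustrative matrix: 3 classifiers scored on accuracy and two fairness metrics.
models = [[0.90, 0.70, 0.65],
          [0.85, 0.80, 0.75],
          [0.88, 0.72, 0.70]]
scores = rank_alternatives(models)
best = int(np.argmax(scores))  # index of the preferred classifier
```

In this toy matrix the fairness columns vary more across models than accuracy does, so they receive larger Gaussian-factor weights and the second classifier, strongest on both fairness criteria, is selected despite its lower accuracy.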