Instance hardness measures for classification and regression problems

Authors

  • Gustavo P. Torquette, Universidade Federal de São Paulo
  • Victor S. Nunes, UNIFESP and Instituto Tecnológico de Aeronáutica
  • Pedro Y. A. Paiva, Instituto Tecnológico de Aeronáutica
  • Ana C. Lorena, Instituto Tecnológico de Aeronáutica

DOI:

https://doi.org/10.5753/jidm.2024.3463

Keywords:

Data complexity, Instance Hardness, Hardness Measures, Machine Learning

Abstract

While the most common approach in Machine Learning (ML) studies is to analyze the performance achieved on a dataset through summary statistics, a fine-grained analysis at the level of individual instances can provide valuable information to the ML practitioner. For instance, one can inspect whether the instances whose labels are hardest to predict have quality issues that should be addressed beforehand, or identify the need for more powerful learning methods to address the challenge imposed by one or a set of instances. This paper formalizes and presents a set of meta-features, known as instance hardness measures, for characterizing which instances of a dataset are the hardest to have their label predicted accurately and why they are so. While there are already measures able to characterize instance hardness in classification problems, there is a lack of work devoted to regression problems. Here we present and analyze instance hardness measures for both classification and regression problems from different perspectives, taking into account the particularities of each type of problem. To validate our results, synthetic datasets with different sources and levels of complexity are built and analyzed, indicating which kind of difficulty each measure is able to better quantify. A Python package containing all implementations is also provided.
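As a concrete illustration of what such per-instance meta-features look like (a sketch of ours, not the authors' package), the classic classification measure k-Disagreeing Neighbors (kDN) scores each instance by the fraction of its k nearest neighbors carrying a different label; a simple neighborhood-based regression analogue, used here purely for illustration, measures how far an instance's target deviates from the mean target of its neighbors:

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def kdn(X, y, k=5):
    """k-Disagreeing Neighbors: fraction of the k nearest neighbors
    of each instance whose label differs from the instance's own."""
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X)
    # the first neighbor returned is the instance itself, so drop it
    idx = nn.kneighbors(X, return_distance=False)[:, 1:]
    return (y[idx] != y[:, None]).mean(axis=1)

def neighbor_target_deviation(X, y, k=5):
    """Illustrative regression analogue: absolute deviation between an
    instance's target and the mean target of its k nearest neighbors."""
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X)
    idx = nn.kneighbors(X, return_distance=False)[:, 1:]
    return np.abs(y - y[idx].mean(axis=1))
```

Instances with kDN close to 1 lie deep inside a region dominated by another class (borderline or noisy points), while a large neighbor-target deviation flags a likely target outlier in regression; both are the kind of per-instance diagnostics the paper formalizes.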


References

Arruda, J. L., Prudêncio, R. B., and Lorena, A. C. (2020). Measuring instance hardness using data complexity measures. In Brazilian Conference on Intelligent Systems, pages 483–497. Springer.

Cruz, R. M., Sabourin, R., and Cavalcanti, G. D. (2018). Dynamic classifier selection: Recent advances and perspectives. Information Fusion, 41:195–216.

Cruz, R. M., Sabourin, R., Cavalcanti, G. D., and Ren, T. I. (2015). META-DES: A dynamic ensemble selection framework using meta-learning. Pattern Recognition, 48(5):1925–1935.

Garcia, L. P., de Carvalho, A. C., and Lorena, A. C. (2015). Effect of label noise in the complexity of classification problems. Neurocomputing, 160:108–119.

Leyva, E., González, A., and Pérez, R. (2014). A set of complexity measures designed for applying meta-learning to instance selection. IEEE Transactions on Knowledge and Data Engineering, 27(2):354–367.

Leyva, E., González, A., and Pérez, R. (2015). Three new instance selection methods based on local sets: A comparative study with several approaches from a bi-objective perspective. Pattern Recognition, 48(4):1523–1537.

Lorena, A. C., De Carvalho, A. C., and Gama, J. M. (2008). A review on the combination of binary classifiers in multiclass problems. Artificial Intelligence Review, 30:19–37.

Lorena, A. C., Garcia, L. P., Lehmann, J., Souto, M. C., and Ho, T. K. (2019). How complex is your classification problem? A survey on measuring classification complexity. ACM Computing Surveys (CSUR), 52(5):1–34.

Lorena, A. C., Maciel, A. I., de Miranda, P. B., Costa, I. G., and Prudêncio, R. B. (2018). Data complexity meta-features for regression problems. Machine Learning, 107(1):209–246.

Martínez-Plumed, F., Prudêncio, R. B., Martínez-Usó, A., and Hernández-Orallo, J. (2019). Item response theory in AI: Analysing machine learning classifiers at the instance level. Artificial Intelligence, 271:18–42.

Moraes, J. V., Reinaldo, J. T., Ferreira-Junior, M., Silva Filho, T., and Prudêncio, R. B. (2022). Evaluating regression algorithms at the instance level using item response theory. Knowledge-Based Systems, 240:108076.

Morais, G. and Prati, R. C. (2013). Complex network measures for data set characterization. In 2013 Brazilian Conference on Intelligent Systems, pages 12–18. IEEE.

Paiva, P. Y. A., Moreno, C. C., Smith-Miles, K., Valeriano, M. G., and Lorena, A. C. (2022). Relating instance hardness to classification performance in a dataset: a visual approach. Machine Learning, 111(8):3085–3123.

Pedregosa, F., Varoquaux, G., Gramfort, A., Michel, V., Thirion, B., Grisel, O., Blondel, M., Prettenhofer, P., Weiss, R., Dubourg, V., et al. (2011). Scikit-learn: Machine learning in Python. Journal of Machine Learning Research, 12:2825–2830.

Rivolli, A., Garcia, L. P., Soares, C., Vanschoren, J., and de Carvalho, A. C. (2022). Meta-features for meta-learning. Knowledge-Based Systems, page 108101.

Schweighofer, E. (2021). Data-centric machine learning: Improving model performance and understanding through dataset analysis. In Legal Knowledge and Information Systems: JURIX 2021, volume 346, page 54. IOS Press.

Smith, M. R., Martinez, T., and Giraud-Carrier, C. (2014). An instance level analysis of data complexity. Machine Learning, 95(2):225–256.

Sowkarthika, B., Gyanchandani, M., Wadhvani, R., and Shukla, S. (2023). Data complexity-based dynamic ensembling of SVMs in classification. Expert Systems with Applications, 216:119437.

Torquette, G. P., Nunes, V. S., Paiva, P. Y., Neto, L. B., and Lorena, A. C. (2022). Characterizing instance hardness in classification and regression problems. In Proceedings of KDMiLe 2022. arXiv preprint arXiv:2212.01897.

Vanschoren, J. (2019). Meta-learning. In Automated Machine Learning: Methods, Systems, Challenges, pages 35–61. Springer.

Published

2024-02-27

How to Cite

Torquette, G. P., Nunes, V. S., Paiva, P. Y. A., & Lorena, A. C. (2024). Instance hardness measures for classification and regression problems. Journal of Information and Data Management, 15(1), 152–174. https://doi.org/10.5753/jidm.2024.3463

Issue

Section

Best Papers of KDMiLe 2022 - Extended Papers