Classification with Reject Option: Building Meta-models to Predict Classification Error

  • Patricia S. M. Ueda ITA
  • Maria Gabriela Valeriano ITA
  • Arthur D. Mangussi ITA
  • Ricardo B. C. Prudêncio UFPE
  • Ana C. Lorena ITA

Abstract


Instance Hardness Measures (IHMs) have been adopted in the literature to evaluate and understand the difficulty of classifying specific instances within a dataset. Based on this information, the quality of a dataset can be improved at training time (e.g., by instance filtering), thereby refining the learned Machine Learning (ML) models. In this work, we take a different direction: leveraging IHMs at deployment time to decide whether the predictions of an ML model are reliable enough to be accepted. In the proposed solution, we first extract a set of IHMs adapted so that the actual labels of the query instances are not required; this is essential for estimating the difficulty of instances at test time, since they are unlabeled. We then train meta-models to identify, based on the IHM values, whether an instance will be misclassified by the ML model. A prediction for a test instance is rejected if the meta-model indicates that the instance will be misclassified. To evaluate the viability of the proposal, we perform experiments with two synthetic and two real-world healthcare datasets. This rejection approach improved the reliability of the ML models by increasing the accuracy on the accepted instances.
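The pipeline described above can be sketched as follows. This is an illustrative toy example, not the paper's implementation: the two label-free hardness proxies used here (prediction margin and mean distance to nearest training neighbors), the choice of base and meta learners, and all parameter values are assumptions for demonstration only.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neighbors import NearestNeighbors

SEED = 42
# Synthetic binary task with label noise, so some instances are genuinely hard.
X, y = make_classification(n_samples=1500, n_features=10, flip_y=0.15,
                           random_state=SEED)
X_tr, X_tmp, y_tr, y_tmp = train_test_split(X, y, test_size=0.4,
                                            random_state=SEED)
X_val, X_te, y_val, y_te = train_test_split(X_tmp, y_tmp, test_size=0.5,
                                            random_state=SEED)

base = RandomForestClassifier(random_state=SEED).fit(X_tr, y_tr)
nn = NearestNeighbors(n_neighbors=5).fit(X_tr)

def hardness_features(X_q):
    """Label-free hardness proxies for query instances: base-model
    prediction margin and mean distance to 5 nearest training points."""
    proba = base.predict_proba(X_q)
    top2 = np.sort(proba, axis=1)[:, -2:]
    margin = top2[:, 1] - top2[:, 0]      # small margin -> harder instance
    dist, _ = nn.kneighbors(X_q)
    return np.column_stack([margin, dist.mean(axis=1)])

# Meta-model: learn, on a held-out validation set, whether the base
# model misclassifies an instance given its hardness features.
meta_X = hardness_features(X_val)
meta_y = (base.predict(X_val) != y_val).astype(int)
meta = LogisticRegression().fit(meta_X, meta_y)

# At test time, reject predictions the meta-model flags as likely errors.
accept = meta.predict(hardness_features(X_te)) == 0
acc_all = (base.predict(X_te) == y_te).mean()
acc_accepted = ((base.predict(X_te[accept]) == y_te[accept]).mean()
                if accept.any() else float("nan"))
print(f"accuracy (all): {acc_all:.3f}  "
      f"accuracy (accepted): {acc_accepted:.3f}  "
      f"coverage: {accept.mean():.3f}")
```

The design mirrors the reject-option idea in the abstract: reliability on the accepted subset is traded against coverage, since every rejected instance must be handled by some fallback (e.g., a human expert).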
Published
29/09/2025
UEDA, Patricia S. M.; VALERIANO, Maria Gabriela; MANGUSSI, Arthur D.; PRUDÊNCIO, Ricardo B. C.; LORENA, Ana C. Classification with Reject Option: Building Meta-models to Predict Classification Error. In: BRAZILIAN CONFERENCE ON INTELLIGENT SYSTEMS (BRACIS), 35., 2025, Fortaleza/CE. Anais [...]. Porto Alegre: Sociedade Brasileira de Computação, 2025. p. 223-238. ISSN 2643-6264.