Comparing Neural Network Encodings for Logic-Based Explainability
Abstract
Providing explanations for the outputs of artificial neural networks (ANNs) is crucial in many contexts, such as critical systems, data protection laws, and handling adversarial examples. Logic-based methods can offer explanations with correctness guarantees, but they face scalability challenges. These scalability issues motivate comparing different encodings of ANNs into the logical constraints used in logic-based explainability. This work compares two encodings of ANNs: one has been used in the literature to provide explanations, while the other is adapted here for the context of explainability. The second encoding uses fewer variables and constraints, potentially improving efficiency. Experiments showed similar running times for computing explanations, but the adapted encoding performed up to 18% better in building the logical constraints and up to 16% better in overall time.
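To make the idea of "encoding an ANN into logical constraints" concrete, below is a minimal, hypothetical sketch of the textbook big-M mixed-integer encoding of a single ReLU neuron y = max(0, w·x + b). This is an illustration only, not necessarily either of the two encodings the paper compares; the function name and the constant M are assumptions chosen for this example.

```python
# Illustrative sketch (assumption): big-M encoding of one ReLU neuron.
# It introduces one continuous variable y and one binary variable z per
# neuron, plus four linear constraints; an encoding with fewer variables
# and constraints per neuron would yield a smaller logical formula.

def relu_big_m_constraints(x, w, b, M=100.0):
    """Return (y, z, satisfied): the ReLU output, the binary indicator,
    and whether the four big-M constraints hold for this assignment."""
    pre = w * x + b             # pre-activation value w*x + b
    y = max(0.0, pre)           # true ReLU output
    z = 1 if pre > 0 else 0     # binary variable: 1 if the neuron is active
    satisfied = (
        y >= pre                # y >= w*x + b
        and y >= 0.0            # y >= 0
        and y <= pre + M * (1 - z)  # inactive case: forces y = w*x + b when z = 1
        and y <= M * z          # active case: forces y = 0 when z = 0
    )
    return y, z, satisfied

print(relu_big_m_constraints(2.0, 3.0, -1.0))   # active neuron: pre = 5
print(relu_big_m_constraints(-2.0, 3.0, -1.0))  # inactive neuron: pre = -7
```

In such an encoding, M must upper-bound the magnitude of every pre-activation, which is one reason alternative encodings with fewer variables or tighter constraints can build and solve faster.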
Published
17/11/2024
How to Cite
CARVALHO, Levi Cordeiro; OLIVEIRA, Saulo A. F.; ROCHA, Thiago Alves.
Comparing Neural Network Encodings for Logic-Based Explainability. In: BRAZILIAN CONFERENCE ON INTELLIGENT SYSTEMS (BRACIS), 13., 2024, Belém/PA.
Anais [...]. Porto Alegre: Sociedade Brasileira de Computação, 2024. p. 281-295.
ISSN 2643-6264.