Faithfully Explaining Predictions of Knowledge Embeddings
Abstract
Knowledge embeddings are key ingredients of advanced question-answering and recommender systems. Even though their predictions are accurate, they are hard for human users to interpret; interpretability techniques are needed to provide meaningful, human-friendly explanations for the predictions generated by embeddings. We propose a novel model-agnostic method, inspired by local surrogate approaches, that generates faithful explanations for knowledge embedding predictions.
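The abstract does not detail the proposed method, but the local-surrogate idea it builds on can be illustrated with a minimal sketch. The code below is an assumption-laden illustration, not the method proposed in this work: it assumes a TransE-style black-box scorer, represents the local neighborhood of the head entity as binary indicators over its observed facts, and fits a weighted interpretable regressor whose coefficients indicate which neighboring facts locally support the prediction. All names (explain_prediction, neighbor_facts, the toy entities and relations) are hypothetical.

    # Minimal, hypothetical sketch of a LIME-style local surrogate for one
    # knowledge-embedding prediction. The TransE-style scorer, the neighborhood
    # encoding, and every name below are illustrative assumptions, not the
    # method proposed in this work.
    import numpy as np
    from sklearn.linear_model import Ridge

    def explain_prediction(h, r, t, entity_emb, relation_emb,
                           neighbor_facts, n_samples=500, seed=0):
        # neighbor_facts: list of (relation, tail) pairs observed for head h;
        # each pair becomes one interpretable binary feature of the surrogate.
        rng = np.random.default_rng(seed)
        X, y, w = [], [], []
        for _ in range(n_samples):
            # Perturb the local neighborhood by randomly keeping/dropping facts.
            mask = rng.integers(0, 2, size=len(neighbor_facts))
            # Rebuild a proxy head embedding from the kept facts
            # (in TransE, h + r' ~= t', hence h ~= t' - r').
            kept = [entity_emb[t2] - relation_emb[r2]
                    for (r2, t2), m in zip(neighbor_facts, mask) if m]
            h_emb = np.mean(kept, axis=0) if kept else entity_emb[h]
            X.append(mask)
            # The TransE-style distance stands in for the black-box score.
            y.append(-np.linalg.norm(h_emb + relation_emb[r] - entity_emb[t]))
            # Samples closer to the full neighborhood get higher weight.
            w.append(np.exp(mask.mean() - 1.0))
        surrogate = Ridge(alpha=1.0).fit(np.array(X), np.array(y),
                                         sample_weight=np.array(w))
        # Each coefficient estimates how much one neighboring fact
        # contributes to the plausibility of the target triple.
        return dict(zip(neighbor_facts, surrogate.coef_))

    # Illustrative call with toy random embeddings:
    # ents = {e: np.random.randn(32) for e in ["paris", "france", "europe", "seine"]}
    # rels = {r: np.random.randn(32) for r in ["capital_of", "located_in", "crossed_by"]}
    # explain_prediction("paris", "capital_of", "france", ents, rels,
    #                    [("located_in", "europe"), ("crossed_by", "seine")])

In such a sketch, the surrogate's coefficients play the role of the explanation: neighboring facts with large positive weight are the ones that locally support the predicted triple.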