Transfer Learning by Mapping and Revising Boosted Relational Dependency Networks
Statistical machine learning algorithms usually assume that a considerable amount of data is available to train the models. However, they fail in domains where data is difficult or expensive to obtain. Transfer learning has emerged to address this problem of learning from scarce data by using a model learned in a source domain, where data is plentiful, as a starting point for learning in the target domain. At the same time, real-world data contains objects and their relations, usually gathered from noisy environments. Finding patterns in such uncertain relational data has been the focus of the Statistical Relational Learning (SRL) area. Thus, to address domains with scarce, relational, and uncertain data, in this paper we propose TreeBoostler, an algorithm that transfers state-of-the-art SRL models, namely Boosted Relational Dependency Networks, learned in a source domain to a target domain. TreeBoostler first finds a mapping between pairs of predicates to accommodate the additive trees within the target vocabulary. It then employs two theory revision operators devised to handle incorrect relational regression trees, with the aim of improving the performance of the mapped trees. In the experiments presented in this paper, TreeBoostler successfully transferred knowledge among several distinct domains. Moreover, it performs comparably to or better than learning-from-scratch methods in terms of accuracy, and it outperforms a transfer learning approach in terms of both accuracy and runtime.
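To give a concrete flavor of the first phase described above, the sketch below enumerates candidate mappings between source and target predicates. It is only an illustration under a strong simplifying assumption: predicates are considered compatible whenever their arities match, whereas TreeBoostler itself also scores candidate mappings against target data. The vocabularies (`workedunder`, `advisedby`, etc.) and function names are hypothetical examples, not the paper's implementation.

```python
from itertools import permutations

def compatible(src, tgt):
    """Illustrative compatibility test: two predicates are compatible
    here when their arities match. (The real mapping search also
    evaluates candidate mappings on target-domain data.)"""
    return len(src[1]) == len(tgt[1])

def candidate_mappings(source_preds, target_preds):
    """Enumerate injective mappings from source predicates to target
    predicates, keeping only those that pair arity-compatible ones."""
    sources = list(source_preds.items())
    for perm in permutations(target_preds.items(), len(sources)):
        pairs = list(zip(sources, perm))
        if all(compatible(s, t) for s, t in pairs):
            yield {s[0]: t[0] for s, t in pairs}

# Hypothetical vocabularies in the style of common SRL benchmarks:
source = {"workedunder": ("person", "person"),
          "directed": ("person", "movie")}
target = {"advisedby": ("person", "person"),
          "authorof": ("person", "paper")}

for mapping in candidate_mappings(source, target):
    print(mapping)
```

Each yielded dictionary renames the source predicates into the target vocabulary; a full system would then evaluate the mapped trees on target data and keep the best-scoring mapping before applying the revision operators.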