Trust Computing Based on Argumentation Debates with Votes for Detecting Lying Agents
Abstract
In a multi-agent system (MAS), it is common for agents to delegate tasks to one another. However, due to the subjectivity of the information agents use during decision making, an agent may end up delegating a task to an untrustworthy partner. In this work, we present a trust computing approach based on quantitative argumentation debates with votes (QuAD-V), in which a trust measure is estimated from the agents' opinions about the service provided by a partner. Moreover, the approach provides a mechanism to evaluate the credibility of agents that act as information sources. Our results demonstrate how this trust computing approach can be employed to detect lying agents, i.e., agents that slander or promote other agents.
References
Baroni, P., Romano, M., Toni, F., Aurisicchio, M., and Bertanza, G. (2015). Automatic evaluation of design alternatives with quantitative argumentation. Argument & Computation, 6(1):24–49.
Braga, D. D. S., Niemann, M., Hellingrath, B., and Neto, F. B. D. L. (2018). Survey on computational trust and reputation models. ACM Computing Surveys (CSUR), 51(5):1–40.
Buccafurri, F., Comi, A., Lax, G., and Rosaci, D. (2015). Experimenting with certified reputation in a competitive multi-agent scenario. IEEE Intelligent Systems, 31(1):48–55.
Castelfranchi, C. and Falcone, R. (1998). Towards a theory of delegation for agent-based systems. Robotics and Autonomous Systems, 24(3-4):141–157.
Castelfranchi, C. and Falcone, R. (2010). Trust theory: A socio-cognitive and computational model, volume 18. John Wiley & Sons.
Castelfranchi, C. and Guerini, M. (2007). Is it a promise or a threat? Pragmatics & Cognition, 15(2):277–311.
Cayrol, C. and Lagasquie-Schiex, M.-C. (2005). On the acceptability of arguments in bipolar argumentation frameworks. In European Conference on Symbolic and Quantitative Approaches to Reasoning and Uncertainty, pages 378–389. Springer.
Cho, J.-H., Chan, K., and Adali, S. (2015). A survey on trust modeling. ACM Computing Surveys (CSUR), 48(2):1–40.
Conte, R. and Paolucci, M. (2002). Reputation in artificial societies: Social beliefs for social order, volume 6. Springer Science & Business Media.
Conte, R. and Paolucci, M. (2003). Social cognitive factors of unfair ratings in reputation reporting systems. In Proceedings IEEE/WIC International Conference on Web Intelligence (WI 2003), pages 316–322. IEEE.
Dung, P. M. (1995). On the acceptability of arguments and its fundamental role in nonmonotonic reasoning, logic programming and n-person games. Artificial Intelligence, 77(2):321–357.
Griffiths, N. (2005). Task delegation using experience-based multi-dimensional trust. In Proceedings of the fourth international joint conference on Autonomous agents and multiagent systems, pages 489–496.
Kunz, W. and Rittel, H. W. (1970). Issues as elements of information systems, volume 131. Citeseer.
Miceli, M. and Castelfranchi, C. (2000). The role of evaluation in cognition and social interaction. Human cognition and agent technology, pages 225–262.
Pinyol, I. and Sabater-Mir, J. (2013). Computational trust and reputation models for open multi-agent systems: a review. Artificial Intelligence Review, 40(1):1–25.
Rago, A. and Toni, F. (2017). Quantitative argumentation debates with votes for opinion polling. In International Conference on Principles and Practice of Multi-Agent Systems, pages 369–385. Springer.
Rago, A., Toni, F., Aurisicchio, M., and Baroni, P. (2016). Discontinuity-free decision support with quantitative argumentation debates. In Fifteenth International Conference on the Principles of Knowledge Representation and Reasoning.
Sabater, J., Paolucci, M., and Conte, R. (2006). Repage: Reputation and image among limited autonomous partners. Journal of Artificial Societies and Social Simulation, 9(2).
Sabater, J. and Sierra, C. (2001). Regret: reputation in gregarious societies. In Proceedings of the fifth international conference on Autonomous agents, pages 194–195.
Singh, R. R. (2018). Designing for multi-agent collaboration: a shared mental model perspective. PhD thesis.
Solhaug, B., Elgesem, D., and Stolen, K. (2007). Why trust is not proportional to risk. In The Second International Conference on Availability, Reliability and Security (ARES'07), pages 11–18. IEEE.
