Meta-Information and Argumentation in Multi-Agent Systems

In this work, we describe our research on meta-information, such as how trustworthy a piece of information is or when and from where it was acquired, in the context of multi-agent systems. In particular, we describe different profiles representing different attitudes towards meta-information about trust, time, and expertise in agents' decision-making and reasoning processes. Furthermore, we describe how we have combined the different meta-information available in multi-agent systems with an argumentation-based reasoning mechanism. In our approach, agents are able to resolve more conflicts between pieces of information/arguments, given that they can use different meta-information to define preferences and to decide between conflicting information. Our framework for meta-information in multi-agent systems was implemented on a modular architecture, so other meta-information can be added, and different meta-information can be combined in order to create new agent profiles. Therefore, in our approach, different profiles can be instantiated in different application domains, allowing flexibility in the choice of how agents deal with conflicting information in those particular domains.


Introduction
In multi-agent systems, agents are computational entities with autonomous behaviour (i.e., they are able to make decisions and act without direct human intervention in unexpected circumstances). These computational entities are situated in an environment that they are able to sense (through sensors) and act upon (through effectors), and they communicate with each other through message passing [Wooldridge 2009]. Fundamentally, multi-agent systems are developed to solve complex and distributed problems. Towards such solutions, one of the most important aspects of such systems is communication, given that agents need to communicate with others in order to coordinate their activities. Among the communication techniques in multi-agent systems, argumentation-based approaches have received much research attention over the years, perhaps because they allow the exchange of additional information in dialogues for negotiation, deliberation, and many other important aspects of multi-agent systems [Panisson et al. 2015, Parsons and McBurney 2003, Parsons et al. 2002]. This exchange of information allows agents to communicate and understand each other in a more informed way. It can also change the mental attitudes of the agents who receive such information. This is an important aspect of argumentation-based approaches to multi-agent systems, because of the inherent uncertainty and lack of information in these systems [Wooldridge 2009]. Further, recent work has brought argumentation-based reasoning to the context of agent-oriented programming languages [Berariu 2014, Panisson et al. 2014, Panisson and Bordini 2016]. In that context, agents can reason about arguments in order to make decisions and communicate (i.e., they consider arguments for and against a particular conclusion in order to decide whether it is acceptable or not). Also, they can construct arguments in the face of uncertainty (i.e., incomplete and incorrect information). However, it is important that agents construct arguments using the most precise pieces of information available to them, based on the most trustworthy sources, avoiding as much as possible sources of doubt in the arguments used, hence improving their decisions and therefore their actions.
With these issues in mind, we propose an approach that combines argumentation-based reasoning with the different meta-information available in multi-agent systems; for example, using information about the trust placed in the agents who provided the information used in an argument. This allows agents to make decisions in situations where they would otherwise not be able to, for example, because of conflicts that argumentation-based reasoning mechanisms without such meta-information leave unresolved. This is interesting, as not resolving conflicts between arguments can be unsatisfactory in general, especially in multi-agent systems where efficient ways of resolving conflicts are typically required [Amgoud and Ben-Naim 2015]. Differently from previous approaches, we here consider that an agent might have various different sources for the same piece of information. This is in fact often the case in multi-agent systems developed on agent-programming platforms, such as Jason [Bordini et al. 2007], where beliefs are annotated with all known sources of that information. Furthermore, elaborate trust systems have been studied in the context of multi-agent systems [Pinyol and Sabater-Mir 2013] which could provide reliable trust information about each such source. Also, we introduce a modular framework we implemented in Jason, which combines argumentation-based reasoning and the kinds of meta-information discussed in this paper. Our approach allows different agent attitudes to be defined through profiles, describing how agents consider meta-information differently (i.e., micro profiles) or combine meta-information differently (i.e., macro profiles), depending on the needs of particular application domains.
In multi-agent systems, different reasoning and decision-making mechanisms may be implemented depending on the application domain, for example, task reallocation in cooperative groups of humans [Schmidt et al. 2016]. Furthermore, different reasoning and decision-making mechanisms can be combined with the different meta-information available in multi-agent platforms, in order to consider the most relevant information the agents have. Thus, agents are able to make better decisions and to reach more precise conclusions in their reasoning processes. For example, our recent work, presented in [Melo et al. 2016a, Melo et al. 2016c, Panisson et al. 2016], shows how an argumentation-based reasoning mechanism can be combined with meta-information about trust in order to resolve more conflicts between arguments, considering the trust agents have in the sources from which the information used in each argument comes. Also, in [Melo et al. 2016c], it is proposed to use other meta-information when trust in the sources is not enough to resolve a conflict between pieces of information (including arguments). Those papers show that the use of meta-information in reasoning and decision-making mechanisms is modular, and new modules, containing different or combined meta-information, can be added when appropriate for the application domain.
In this work, we present the overall results of a research project on the use of meta-information in agent decision making. As an extension of previous work, we present the definition of micro and macro profiles, showing how they are combined in Section 7, where we present an agent profile called the sceptical dynamic agent as an example. Besides, differently from our approach for time presented in [Melo et al. 2016c], in which time was presented as a criterion to help in reasoning about trust, here we analyse and present a deeper approach, showing how time can be applied independently of other meta-information. In the approach for time presented here, we define concepts such as the relevance of a belief, and we present two examples of agent profiles that consider only time: dynamic and conservative agents. We also explore how trust and time can work together. Then, we define two ways to make these approaches coexist in an agent reasoning system: the functions trbt and trat, defined in Section 5.3, are extensions of our previous work, in which we only mentioned in passing that there was a possible relation between trust and time. Here we define in concrete terms what this relation is, which makes it easier to implement in multi-agent system applications.
The main contributions of this work are: (i) we describe the argumentation-based reasoning mechanism that uses meta-information to support a decision when there are arguments supporting contrary conclusions; (ii) we present the different meta-information we explored and define different profiles for each of them; in this context, different profiles implement different agent attitudes, which describe how agents consider the meta-information available to them in their decision-making and reasoning processes; (iii) we illustrate our approach using a stock market scenario, showing how considering different meta-information is useful to decide between different opinions about different investments.

Argumentation-Based Reasoning
In this work, we focus on extending argumentation-based approaches with the different meta-information available in multi-agent systems, in order to provide a sort of preference between arguments in such frameworks. In particular, we extend the argumentation-based reasoning mechanism reported in [Panisson and Bordini 2016, Panisson et al. 2014], which is one of the few practical approaches implementing argumentation-based reasoning in agent-oriented programming languages; further, that reasoning mechanism is implemented in Jason [Bordini et al. 2007], a well-known platform for the development of multi-agent systems. The argumentation-based approach presented in [Panisson and Bordini 2016] is one branch of the so-called "structured argumentation" approaches, in which, differently from abstract argumentation [Dung 1995], arguments have a particular structure, and it is this structure that provides the different attack relations between arguments. Following the definitions presented in [Panisson and Bordini 2016], arguments are constructed through strict and defeasible inference rules. Intuitively, strict rules are considered stronger than defeasible rules. The strict part of any knowledge base is assumed to be consistent (i.e., contradictions cannot be derived from it). Under this approach, there are two types of arguments to be considered: (i) strict arguments, which are formed only of facts and strict rules (i.e., indisputable knowledge); and (ii) defeasible arguments, which are formed using at least one defeasible rule. In order to define the acceptability of an argument Arg1, it must be verified whether there is another argument Arg2 that attacks Arg1. If such an Arg2 exists, then there is a conflict between the arguments, and conflicts between arguments can be of two types [Panisson and Bordini 2016]:

Definition 1 (Attack Between Arguments) Let ⟨S_i, c_i⟩ and ⟨S_j, c_j⟩ be two arguments. Attacks between arguments can be generalised into two types:
• The argument ⟨S_i, c_i⟩ rebuts the argument ⟨S_j, c_j⟩ if c_i ≡ ¬c_j.
• The argument ⟨S_i, c_i⟩ undercuts the argument ⟨S_j, c_j⟩ if c_i ≡ ¬c_k for some ⟨S_k, c_k⟩ where S_k ⊆ S_j.
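These two attack types can be sketched in code. The following is an illustrative Python sketch (not the authors' implementation), representing an argument as a (support, conclusion) pair of string literals, with "~p" standing for ¬p:

```python
def neg(lit):
    """Return the complement of a literal."""
    return lit[1:] if lit.startswith("~") else "~" + lit

def rebuts(arg_i, arg_j):
    """arg_i rebuts arg_j if their conclusions are complementary."""
    (_, c_i), (_, c_j) = arg_i, arg_j
    return c_i == neg(c_j)

def undercuts(arg_i, arg_j, subarguments_j):
    """arg_i undercuts arg_j if arg_i's conclusion contradicts the
    conclusion of some sub-argument of arg_j (support S_k ⊆ S_j)."""
    (s_j, _) = arg_j
    (_, c_i) = arg_i
    return any(set(s_k) <= set(s_j) and c_i == neg(c_k)
               for (s_k, c_k) in subarguments_j)

# Two arguments with complementary conclusions rebut each other.
arg_e = (["x", "x=>e"], "e")
arg_not_e = (["y", "y=>~e"], "~e")
assert rebuts(arg_e, arg_not_e) and rebuts(arg_not_e, arg_e)
```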
When an argument Arg_i attacks another argument Arg_j, this does not necessarily mean that Arg_i defeats Arg_j. Defeat is a "successful" attack, considering the set of arguments that defends the attacked one, including preferences between the conflicting arguments [Walton et al. 2008]. In [Panisson and Bordini 2016], the set of acceptable arguments from an agent's belief base is defined in terms of the defeasible semantics introduced in [Governatori et al. 2004]. The defeasible semantics is similar to the grounded semantics from Dung's work [Dung 1995] and is based on the so-called preempting defeaters [Nute 1993]. The preempting defeaters of [Nute 1993] are called ambiguity blocking (in regards to the argumentation system) in [Governatori et al. 2004]. This means that defeasible rules that are attacked by a superior rule cannot be used to attack other rules. An example of preempting defeaters is the knowledge base ∆ below, where we use ⇒ to refer to defeasible inferences:

∆ = {a, a ⇒ b, b ⇒ c, c ⇒ d, x, x ⇒ e, e ⇒ ¬c, y, y ⇒ ¬e}

Different inferences can be drawn from ∆. Considering those inferences, some arguments extracted from ∆ are Arg_1, Arg_2, Arg_3, and Arg_4, where Arg_1 = ⟨{a, a ⇒ b, b ⇒ c, c ⇒ d}, d⟩, Arg_2 = ⟨{x, x ⇒ e, e ⇒ ¬c}, ¬c⟩, Arg_3 = ⟨{x, x ⇒ e}, e⟩ (a sub-argument of Arg_2), and Arg_4 = ⟨{y, y ⇒ ¬e}, ¬e⟩. This way, we may conclude d based on Arg_1, although Arg_2 attacks the sub-argument ⟨{a, a ⇒ b, b ⇒ c}, c⟩ present in Arg_1 (undercutting Arg_1). On the other hand, Arg_2 is undercut by Arg_4. This prevents Arg_2 from undercutting Arg_1, allowing the conclusion of d. Considering the arguments Arg_4 and Arg_3, each rebuts the other. This relation can be seen in Figure 1. A cycle can be seen in Figure 1, and as there is no other acceptable argument attacking Arg_3 or Arg_4, the approach presented in [Panisson and Bordini 2016], like other approaches, is not able to decide which one is acceptable, i.e., both are treated as unacceptable. One approach to deal with unresolved conflicts in argumentation-based mechanisms is to use preferences over the arguments.
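The attack relations in this example can be reconstructed mechanically. The sketch below (an illustrative encoding, not the authors' implementation) lists, for each argument, the conclusions derivable within its support and computes the attacks of Definition 1; note how Arg3 and Arg4 attack each other, forming the cycle of Figure 1:

```python
def neg(lit):
    """Return the complement of a literal ("~p" stands for ¬p)."""
    return lit[1:] if lit.startswith("~") else "~" + lit

# Conclusion of each argument from the running example.
conclusions = {"Arg1": "d", "Arg2": "~c", "Arg3": "e", "Arg4": "~e"}

# Conclusions derivable within each argument's support (i.e., the
# conclusions of the argument itself and of its sub-arguments).
subconcs = {
    "Arg1": ["a", "b", "c", "d"],
    "Arg2": ["x", "e", "~c"],
    "Arg3": ["x", "e"],
    "Arg4": ["y", "~e"],
}

# An argument i attacks j if i's conclusion contradicts some conclusion
# derivable within j (this covers both rebuts and undercuts).
attacks = [(i, j) for i in conclusions for j in conclusions
           if i != j and any(conclusions[i] == neg(c) for c in subconcs[j])]

# Arg2 undercuts Arg1 (at c); Arg4 undercuts Arg2 (at e); Arg3 and Arg4
# rebut each other (e vs ~e) — the undecided cycle of Figure 1.
assert set(attacks) == {("Arg2", "Arg1"), ("Arg3", "Arg4"),
                        ("Arg4", "Arg2"), ("Arg4", "Arg3")}
```

Under ambiguity blocking, Arg4's attack on Arg2 preempts Arg2's attack on Arg1, which is why d remains derivable even though Arg3 and Arg4 block each other.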
Clearly, strict arguments are stronger than defeasible arguments, i.e., when arguments are involved in a conflict, strict arguments always defeat defeasible ones. Considering only defeasible arguments, the work in [Panisson and Bordini 2016] considers two types of priority: (i) priority by specificity, originally defined in defeasible logic [Nute 1994], and (ii) the explicit declaration of priority between defeasible rules, using a special predicate. In priority by specificity, more specific conclusions have priority over more general ones. To exemplify this idea, consider the well-known Tweety example:

def_rule(flies(X), bird(X)).
def_rule(¬flies(X), penguin(X)).
def_rule(bird(X), penguin(X)).
penguin(tweety).
All clauses in the Tweety example are defeasible rules (written using the representation of defeasible rules on the Jason platform [Bordini et al. 2007], as in [Panisson and Bordini 2016]). Considering the knowledge above, we have two conflicting arguments, one supporting that Tweety flies: "Tweety flies, because it is a penguin, penguins are birds, and birds fly", and one supporting that Tweety does not fly: "Tweety does not fly, because it is a penguin and penguins do not fly". The mechanism implemented in [Panisson and Bordini 2016] (as well as defeasible Prolog [Nute 1993]) concludes, in this case, that Tweety does not fly, because the rule for penguins is more specific than the rule for birds, given that penguin is a subclass of bird. In this manner, the argument for Tweety not flying has priority over the other, and so defeats it. Considering the explicit declaration of priority, [Panisson and Bordini 2016] allows one to declare that a rule Rule1 has priority over a rule Rule2 (using the predicate sup(Rule1, Rule2)). Therefore, when two arguments constructed using these rules are in conflict, this declaration is used in order to decide which conclusion will actually be derived.
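The effect of the explicit sup(Rule1, Rule2) declaration can be illustrated with a minimal sketch (an assumed encoding, not the Jason implementation): when two defeasible rules support complementary conclusions, the declared superior rule wins:

```python
# Hypothetical rule names; sup mirrors the sup(Rule1, Rule2) predicate:
# the penguin rule is declared superior to (more specific than) the bird rule.
sup = {("r_no_flies", "r_flies")}

def resolve(rule_a, rule_b):
    """Return the rule whose conclusion prevails in a conflict,
    or None if no priority is declared between the two rules."""
    if (rule_a, rule_b) in sup:
        return rule_a
    if (rule_b, rule_a) in sup:
        return rule_b
    return None

# The argument built with r_no_flies defeats the one built with r_flies,
# so the agent concludes that Tweety does not fly.
assert resolve("r_flies", "r_no_flies") == "r_no_flies"
```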
Although the approach presented in [Panisson and Bordini 2016] has ways to deal with conflicting information (conflicting arguments), when the conflict cannot be resolved considering the set of arguments, this capability is limited. This limitation can be substantially circumvented when we consider preferences over the arguments, generating fewer unresolved conflicts. Such preferences may come from information typically available in multi-agent systems, which we call meta-information, such as trust and time values for the information perceived. With this approach, in the case of otherwise unresolved conflicts, the agent may decide for the information received, for example, from the most reliable source or from an expert in that subject, or it can decide for the most recent information.
Following this idea, we propose to extend the preference relations described above, allowing agents to consider different meta-information available in the environment, according to the domain, to improve their reasoning, allowing them to resolve conflicts that would otherwise remain unresolved.

Meta-information as Reasoning Extension
In multi-agent systems, agents make decisions based on the information available to them. There are cases where agents receive conflicting information, and the argumentation-based approach presented in Section 2 cannot resolve all such conflicts. When a conflict cannot be resolved, this might hinder the decision-making process about how the agent should act. In our approach, agents can combine different meta-information to help resolve such conflicts.
Meta-information is, basically, information that describes other information. For example, for a piece of information ϕ, a meta-information could be a function f(ϕ) that returns a value related to the source, structure, or another characteristic of ϕ. There are many different types of meta-information available in an environment. Consider a set MI = {m_1, m_2, ..., m_n} of different meta-information: in an academic environment, m_1 may be relevant, while in a financial environment, m_2 and m_3 are more appropriate and m_1 may be unnecessary. In other words, the suitability of a meta-information depends on the domain, and each multi-agent system can consider its own meta-information, in its own way.
Some examples of meta-information are: trust, time, expertise, and groups. Trust is useful when an agent Ag_i receives conflicting information ϕ and ¬ϕ from different sources s_1 and s_2, respectively, but s_1 is more reliable to Ag_i. This way, with trust, Ag_i can decide for ϕ. Time is used when Ag_i wants to prioritise the most recent or the oldest information received, and it is useful in dynamic environments. Expertise is suitable for environments where each source has its own expertise. For example, a medical agent Ag_j should be more reliable regarding medical information ϕ when it is in conflict with other information ψ received from a non-medical source Ag_k. Finally, in multi-agent systems, agents are grouped into virtual organisations, which can have their own interests in the multi-agent system. Therefore, considering such groups of agents is an interesting approach for many applications, given that the agents representing a virtual organisation are likely to have similar behaviour and a similar reputation. For example, considering two organisations representing two companies, agents of each company will act towards the best business results for their own company. Thus, when an agent Ag_i receives conflicting information from those two companies, Ag_i is able to consider the reputation of the companies in order to decide the conflict, or even to combine the trust in the agent Ag_j that the information came from with the reputation of the company in which Ag_j plays some role.
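As an illustration, a belief annotated with such meta-information might be represented as follows. This is a hypothetical encoding that loosely mirrors Jason's source annotations, extended with time and expertise (all field names are our own):

```python
# Hypothetical annotated belief: content plus meta-information fields.
belief = {
    "content": "market_rising(techCorp)",
    "sources": ["ag1", "env"],        # who/where the information came from
    "time": 1042,                     # when it was acquired (logical clock)
    "expertise": {"ag1": "finance"},  # declared domain of each agent source
}

def meta(belief, key):
    """f(ϕ): a meta-information function over a piece of information ϕ,
    returning the requested meta-information value (or None if absent)."""
    return belief.get(key)

assert meta(belief, "sources") == ["ag1", "env"]
```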
Besides there being different meta-information, each agent can consider a meta-information in its own way. An example is when, considering the meta-information of time, one agent prioritises the most recent information while another prioritises the oldest. We call this particular approach to a meta-information a micro profile, and it defines how an agent will, internally, consider a particular meta-information. We can say that an agent's profile is composed of different micro profiles, one for each particular meta-information, and a macro profile, which defines the set of meta-information to be considered by the agent. For example, an agent Ag_i's macro profile could be {trust, time}, representing that Ag_i considers trust and time as meta-information in case of conflicts. In this example, trust and time are, separately, micro profiles.
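One simple way to realise this composition (a sketch under our own assumptions, not the framework's implementation) is to treat each micro profile as a scoring function over candidate beliefs, and a macro profile as an ordered list of micro profiles applied until the conflict is decided:

```python
def decide(candidates, macro_profile):
    """Apply micro profiles in order, keeping only top-scored candidates,
    until one candidate remains or the profiles run out."""
    for score in macro_profile:
        top = max(score(c) for c in candidates)
        candidates = [c for c in candidates if score(c) == top]
        if len(candidates) == 1:
            break
    return candidates

trust_score = lambda b: b["trust"]   # micro profile for trust
time_score = lambda b: b["time"]     # micro profile for time (most recent)

beliefs = [
    {"content": "phi",  "trust": 0.8, "time": 10},
    {"content": "~phi", "trust": 0.8, "time": 42},
]
# Macro profile {trust, time}: trust ties at 0.8, so time decides.
assert decide(beliefs, [trust_score, time_score])[0]["content"] == "~phi"
```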
In the next sections, we present our approach for three types of meta-information: trust, time, and expertise. Differently from our previous work [Melo et al. 2016b, Melo et al. 2016c, Melo et al. 2016a, Panisson et al. 2016, Parsons et al. 2011, Tang et al. 2011], each meta-information is first discussed independently, i.e., micro profiles are defined and, after that, we discuss how micro profiles can be combined in order to define macro profiles. We define some interesting profiles in this work; other profiles for meta-information can be developed, and each of them can be considered from different points of view, depending on the domain.

Trust Approach
Trust, the first meta-information considered here, is a well-known approach for helping agents decide what to believe in the case of conflicting information, and there are many works about trust in the literature [Panisson et al. 2016, Melo et al. 2016a, Melo et al. 2016b, Parsons et al. 2012a, Tang et al. 2011]. In trust-based approaches, agents can use the level of trust associated with the sources of contradictory information in order to decide which one to believe. The definitions about trust presented here were built based on [Parsons et al. 2011, Tang et al. 2011] and on our previous work [Panisson et al. 2016, Melo et al. 2016b, Melo et al. 2016a].
To exemplify the use of meta-information, we consider a stock market scenario, where we show how the use of different meta-information can improve the agent's decision-making. First, consider a stock market scenario where the agent sh does not consider any meta-information in its reasoning. Figure 2 presents a case where sh, considering only argumentation-based reasoning, cannot decide on a market to invest in, possibly wasting an investment opportunity. Considering trust, the agent sh could make a decision in the case of Figure 2. To enable this, let us first understand how trust is applied in our approach. In [Tang et al. 2011, Parsons et al. 2011], the authors present a definition of trust as a relation between agents. We follow the work presented in [Panisson et al. 2016, Melo et al. 2016a, Melo et al. 2016c, Melo et al. 2016b], in which trust is seen as a relation between an agent and the possible sources of information. In this case, an agent can have different trust levels for other agents, perceptions from the environment, artifacts, and "mental notes" (beliefs created by the agent itself). From this approach, we formalise a trust relation as τ ⊆ Ags × Srcs, where the existence of the relation indicates that an agent assigns some level of trust to a source. For example, τ(Ag_i, s_j) means that agent Ag_i has at least some trust in source s_j. It is important to note that this is not a symmetric relation. So, if s_j is another agent (say, Ag_j), the existence of τ(Ag_i, Ag_j) does not imply the existence of a τ(Ag_j, Ag_i) relation.
A trust network is a directed graph representing the trust relations within a multi-agent system. It can be defined as Γ = ⟨Srcs, {τ}⟩, where Srcs is the set of nodes in the graph, representing the sources of the trust network, and τ is the set of edges, where each edge is a pairwise trust relation between an agent in Srcs and another source in Srcs. An example of a trust network can be seen in Figure 3, where the ellipses represent agents and the rectangles represent artifacts. The example in Figure 3 illustrates three important aspects of our approach to using trust within multi-agent systems. The first is the discussion of whether transitivity (or indirect trust) should be considered or not. Many authors have questioned whether transitivity can be applied to trust [Falcone and Castelfranchi 2010], and different models, with and without transitivity, have been developed [Lu et al. 2009]. In fact, transitivity can be interesting for a multi-agent system, but it can also be dangerous and complex, which explains why different works have focused on this particular property [Falcone and Castelfranchi 2010, Demolombe 2011]. Differently from [Tang et al. 2011, Parsons et al. 2012b], we do not assume trust to be a transitive relation, but treat transitivity as a domain-dependent property. The second aspect is related to the variables x and y. Assuming that transitivity is acceptable in the system, x and y represent values that depend on the profile of the agent, in our example, the agent John. For example, John's profile may not consider indirect trust, meaning that x = y = null. On the other hand, if John considers indirect trust and is a sceptical agent, for example, he could set these values to the lowest trust value on the path from John to the source (Alanis, for y, or WeatherApp, for x). In this case, John would set x = 0.8 and y = 0.7. Finally, the third aspect concerns the trust values, which vary from zero to one. For example, Tom trusts Mary at level 0.0, meaning that Tom does not trust Mary at all. On the other hand, if John trusts Alanis at level 1.0, this means that Alanis is as reliable as possible to John. In order to measure trust, we follow the definition given in [Parsons et al. 2011, Tang et al. 2011], but adapt the function tr : Ags × Ags → R to become the function tr : Ags × Srcs → R, returning a value between 0 and 1. Also, differently from [Parsons et al. 2011, Tang et al. 2011], we define the relation between tr and τ so that τ(Ag_i, s_j) holds if and only if tr(Ag_i, s_j) ≠ null, where a trust level can in fact be zero, represented by tr(Ag_i, s_j) = 0. This is different from cases where Ag_i has no trust value assigned to s_j, represented by tr(Ag_i, s_j) = null. Both cases can be seen in Figure 3, where we have tr(Tom, Mary) = 0 and tr(John, Tom) = null.

Considering transitivity as a possible relation in the multi-agent system, we can formalise some properties. An agent Ag_i trusts another agent Ag_j directly if tr(Ag_i, Ag_j) ≠ null. Indirect trust occurs when, continuing the previous example, Ag_j has some trust level in an agent Ag_k: in this case, we could say that Ag_i indirectly trusts Ag_k. We say there is a path between agents Ag_0 and Ag_n if it is possible to create a sequence of nodes Ag_0, Ag_1, Ag_2, ..., Ag_{n-1}, Ag_n such that τ(Ag_0, Ag_1), τ(Ag_1, Ag_2), ..., τ(Ag_{n-1}, Ag_n). In order to measure the trust along a particular path from Ag_0 to Ag_n, we need an operator that considers all direct trust values on the path. Following the idea proposed in [Parsons et al. 2011], we can have a general operator ⊗tr, applied as tr(Ag_0, Ag_n) = tr(Ag_0, Ag_1) ⊗tr ... ⊗tr tr(Ag_{n-1}, Ag_n). This way, ⊗tr sets the trust value that Ag_0 has in Ag_n according to the path between them. If there are m different paths from Ag_0 to Ag_n, denoting the trust value of the first path by tr(Ag_0, Ag_n)_1 and the trust value of the last path by tr(Ag_0, Ag_n)_m, following [Parsons et al. 2011], we can define a generic operator ⊕tr as tr(Ag_0, Ag_n) = tr(Ag_0, Ag_n)_1 ⊕tr ... ⊕tr tr(Ag_0, Ag_n)_m. For simplicity, here we assume that: (i) the ⊗tr operator is defined as the minimal trust value on the path: tr(Ag_0, Ag_n) = min{tr(Ag_0, Ag_1), ..., tr(Ag_{n-1}, Ag_n)}; and (ii) the ⊕tr operator is defined as tr(Ag_0, Ag_n) = max{tr(Ag_0, Ag_n)_1, ..., tr(Ag_0, Ag_n)_m}, where m is the number of different paths from Ag_0 to Ag_n.
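With the min/max instantiation of ⊗tr and ⊕tr assumed above, indirect trust can be computed as follows (a sketch with a hypothetical trust network; agent names ag0–ag3 and edge values are illustrative, not the Figure 3 example):

```python
def path_trust(path, tr):
    """⊗tr as min: trust along a path is its weakest direct trust value."""
    return min(tr[(a, b)] for a, b in zip(path, path[1:]))

def indirect_trust(paths, tr):
    """⊕tr as max: trust over several paths is the best path's value."""
    return max(path_trust(p, tr) for p in paths)

# Hypothetical network with two paths from ag0 to ag3.
tr = {("ag0", "ag1"): 0.9, ("ag1", "ag3"): 0.7,
      ("ag0", "ag2"): 0.8, ("ag2", "ag3"): 0.6}
paths = [["ag0", "ag1", "ag3"], ["ag0", "ag2", "ag3"]]

assert path_trust(paths[0], tr) == 0.7   # min(0.9, 0.7)
assert path_trust(paths[1], tr) == 0.6   # min(0.8, 0.6)
assert indirect_trust(paths, tr) == 0.7  # max(0.7, 0.6)
```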

Trust on beliefs
The objective of using trust in the agent's reasoning is to decide between conflicting information/beliefs acquired. In this section, we present how the trust in a belief ϕ can be calculated, based on the sources of ϕ. We assume that the trust values in other agents are explicitly declared in the agent's belief base (but dynamically calculated) based on the previous approach, while the trust values of information acquired from the environment will depend on the application domain, as, for example, multiple sensors can have different trust values associated with them. To exemplify the ideas introduced in this section, consider the values in Table 1, representing the trust that an agent Ag_i has in different sources. We write trb_i(ϕ) for the trust value that Ag_i has in ϕ, based on the trust Ag_i has in the sources of ϕ. The function trb is calculated according to the agent profile, reflecting that each agent has its own way of determining how much it trusts the sources and how much the trust in the sources impacts the trust in the information. Here we describe two agent micro profiles for trust, based on our previous work [Melo et al. 2016a, Melo et al. 2016c], where each profile has its own way of calculating trb:

Definition 2 (Credulous Agent) A credulous agent considers only the most trustworthy source of information, and does not look for an overall social value.
Here, social value refers to the number of different sources providing the same information. A credulous agent does not care about this number, considering only the most trustworthy source. The formula used by a credulous agent is trb_i(ϕ) = max{tr(Ag_i, s_1), ..., tr(Ag_i, s_n)}, where {s_1, ..., s_n} is the set of sources that informed ϕ to Ag_i.
Definition 3 (Sceptical Agent) A sceptical agent considers the number of sources from which it has received the information, and the trust value of each such source, in order to have some form of social trust value.
A sceptical agent considers the number of sources that the information ϕ comes from, caring about the social value. Therefore, we use a formula that sums the trust values of the sources from which Ag_i received ϕ, determining a social trust value as follows:

trb_i(ϕ) = (Σ_{s_j ∈ S+_ϕ} tr(Ag_i, s_j)) / (|S+_ϕ| + |S−_ϕ|)

where S+_ϕ = {s_1, ..., s_n} is the set of n different sources of ϕ and S−_ϕ is the set of sources of ¬ϕ. For example, considering Ag_i as sceptical, suppose we have S+_ϕ = {ag1, ag2, ag3} and S−_ϕ = {ag4}. Then Ag_i obtains trb_i(ϕ) = (0.3 + 0.4 + 0.5)/4 = 0.3 and trb_i(¬ϕ) = 0.8/4 = 0.2. Therefore, the agent could decide for ϕ.
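The two micro profiles can be sketched as follows, using the example values above (trust 0.3, 0.4, and 0.5 in the sources of ϕ, and 0.8 in the single source of ¬ϕ); note that a credulous agent would decide the other way, for ¬ϕ:

```python
def trb_credulous(source_trusts):
    """Credulous: only the most trustworthy source counts."""
    return max(source_trusts)

def trb_sceptical(pro_trusts, total_sources):
    """Sceptical: sum of the trust in the supporting sources, divided by
    the total number of sources on both sides (social trust value)."""
    return sum(pro_trusts) / total_sources

pro = [0.3, 0.4, 0.5]   # trust in the sources of phi
con = [0.8]             # trust in the single source of ~phi
total = len(pro) + len(con)

assert abs(trb_sceptical(pro, total) - 0.3) < 1e-9   # (0.3+0.4+0.5)/4
assert abs(trb_sceptical(con, total) - 0.2) < 1e-9   # 0.8/4
assert trb_credulous(con) > trb_credulous(pro)       # credulous prefers ~phi
```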
Both profiles can be interesting in certain domains. For domains where they are not suitable, many other profiles can be developed, according to the application needs. Also, we can extend the trust evaluation of some operators, as is the case for the ⊕tr operator presented before. It can be redefined to allow a sceptical agent Ag_i to consider the number m of paths between Ag_i and a source s_j when calculating tr(Ag_i, s_j). For example, consider that, with the max operator, we have tr(Ag_i, s_j) = 0.6, while np(Ag_i, s_j) = 1, where np is a function that returns the number of different paths from Ag_i to s_j. This way, we have tr(Ag_i, s_j) = tr(Ag_i, s_j)_1. Now, consider another source s_k, with tr(Ag_i, s_k) = 0.6 also acquired using the max operator, but with np(Ag_i, s_k) = 4. This way, we have tr(Ag_i, s_k) = max{tr(Ag_i, s_k)_1, ..., tr(Ag_i, s_k)_4}. Then, it is possible for Ag_i to consider s_k more trustworthy than s_j, as long as Ag_i takes into account the number of paths to the source.

Trust on Arguments
As described above, the argumentation-based reasoning mechanism presented in [Panisson and Bordini 2016] allows agents to resolve conflicts between arguments, but some conflicts may remain unresolved. In such cases, we can use meta-information in order to define preferences between conflicting arguments.
Considering the meta-information of trust, we are able to apply trust on beliefs in the argumentation context, calculating the trust value of an argument, in order to decide between conflicting arguments by comparing such trust values. The approach presented here is applicable to both premises and inference rules as used in [Panisson and Bordini 2016], given that inference rules are represented using special predicates in the format of AgentSpeak beliefs. The trust value of an argument depends on the trust value of each element in its support (in our case, premises and inference rules, both stored as beliefs).
Definition 4 (Trust on Arguments) Let ⟨S, c⟩ be an argument; its trust value is given by the trust of its support S, as follows: tra(⟨S, c⟩) = trb(ϕ_1) ⊗tra ... ⊗tra trb(ϕ_n), with S = {ϕ_1, ..., ϕ_n} being the support of the argument.
Considering the micro profiles introduced in Section 4.1, the generic operator ⊗tra can be defined as follows: (i) credulous agents use ⊗tra as the maximum trust value, i.e., taking the highest trust value present in the argument's support set as the trust value for the argument as a whole: tra(⟨S, c⟩) = max{trb(ϕ_1), ..., trb(ϕ_n)}; and (ii) sceptical agents use the minimum value for ⊗tra, taking the lowest trust value present in the argument's support set as the trust value for the argument: tra(⟨S, c⟩) = min{trb(ϕ_1), ..., trb(ϕ_n)}.
When agent Ag_i has multiple arguments for the same conclusion c, for example, the arguments ⟨S_1, c⟩ and ⟨S_2, c⟩, the agent can opt for the argument that has the highest trust value: argument(⟨S, c⟩) = max{tra(⟨S_1, c⟩), ..., tra(⟨S_n, c⟩)}. Therefore, when we have an unresolved conflict between two arguments, we can resolve the conflict by looking at the trust values, preferring the argument with the higher trust value.
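Definition 4 and this tie-breaking rule can be sketched as follows (illustrative code with hypothetical trb values; note how the credulous and sceptical instantiations of ⊗tra can prefer different arguments):

```python
def tra_credulous(support_trusts):
    """⊗tra as max: the highest trb value in the argument's support."""
    return max(support_trusts)

def tra_sceptical(support_trusts):
    """⊗tra as min: the lowest trb value in the argument's support."""
    return min(support_trusts)

def prefer(arg_a, arg_b, tra):
    """Resolve an unresolved conflict by comparing argument trust values;
    return None if the values tie (other meta-information is then needed)."""
    ta, tb = tra(arg_a), tra(arg_b)
    if ta == tb:
        return None
    return arg_a if ta > tb else arg_b

# Hypothetical trb values of the beliefs in each argument's support.
arg1 = [0.9, 0.4]
arg2 = [0.6, 0.6]
assert prefer(arg1, arg2, tra_credulous) is arg1   # max: 0.9 > 0.6
assert prefer(arg1, arg2, tra_sceptical) is arg2   # min: 0.4 < 0.6
```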
Although we introduced two simple agent micro profiles above, clearly other profiles and instantiations for the generic operators could be used, as suggested in [Parsons et al. 2011, Tang et al. 2011, Melo et al. 2016c].
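As an illustration, Definition 4 and the two micro profiles can be sketched as follows. This is a minimal sketch in Python: the function names and the representation of an argument's support as a plain list of trb values are our own assumptions, not part of the framework.

```python
# Minimal sketch of Definition 4 and the trust micro profiles.
# An argument's support is represented simply as a list of the trb
# values of its premises and inference rules (our own assumption).

def tra_credulous(support_trust):
    """Credulous agent: the argument's trust is the highest trust
    value in its support set."""
    return max(support_trust)

def tra_sceptical(support_trust):
    """Sceptical agent: the argument's trust is the lowest trust
    value in its support set."""
    return min(support_trust)

def preferred(arguments, tra):
    """Among arguments for the same conclusion, pick the one with
    the highest trust value under the chosen profile."""
    return max(arguments, key=tra)
```

For example, a sceptical agent prefers an argument whose weakest premise is still more trusted than the weakest premise of the alternative.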
Considering the stock market scenario presented in Figure 2, agent sh is now able to use the trust meta-information in its reasoning. The trust value of each argument can be seen in Figure 4. Recall that the trust value of an argument is based on the trust values of its beliefs, and the trust value of a belief is based on the trust values of its sources. This way, the trust values of the arguments in Figure 4 are directly determined by the trust values of the advisers. In practice, in our scenario, these trust values could be obtained through the accumulation of experiences between sh and the advisers. In other words, sh could, for some time, ask the advisers for suggestions without investing, just evaluating the results it would have obtained by following those suggestions, and using these results to determine the trust value of each adviser. Now, using trust, sh could finally decide for the argument with the greatest trust value. However, there is a problem in this particular case: in Figure 4, it can be seen that Arg2, Arg3 and Arg4 have the same trust value. This shows that the use of trust reduces the number of undecided conflicts but, in some cases, does not eliminate them. For these cases, the agent needs to consider other meta-information in its reasoning.

Time Approach
In multi-agent systems, there can exist many different environments, each one with its particular characteristics. Some of these environments are dynamic, i.e., they are constantly changing. The consequence of these constant changes is that, if the agents do not perceive them, they will hold outdated beliefs in their belief bases. Considering this dynamic characteristic of an environment, the trust approach may not be enough. That is why we can consider the time of a piece of information in the agent's reasoning; in some scenarios, the time information may be even more important than trust.
First, it is important to define how an agent can record the time of a piece of information in its belief base. In Jason [Bordini et al. 2007], a belief ϕ can be annotated with each of its sources, and this annotation method can be extended to record the time when ϕ was received from each source.
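Such time-stamped source annotations can be sketched as follows, modelled here with a plain Python dictionary rather than Jason's annotation mechanism (the function name and data layout are our own, for illustration only):

```python
# Sketch of time-stamped source annotations, modelled as a Python
# dict instead of Jason belief annotations (illustration only).

belief_base = {}

def add_belief(belief, source, time):
    """Record that `source` informed `belief` at `time`."""
    belief_base.setdefault(belief, []).append((source, time))
```

Each belief thus accumulates a list of (source, time) pairs, mirroring the extended annotations described above.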
In a dynamic environment, searching for the most recent information is usually related to searching for the most accurate one. This is not true for all environments: in some of them, the oldest information can be considered the most consolidated. As an example, consider the first case, where the most recent information is usually the most accurate. Following this, there may be cases where an agent Agi receives ϕ from sj, and some time later, Agi receives ¬ϕ from sj. Considering that the only source of ϕ and ¬ϕ is sj, using time, Agi can easily decide for ¬ϕ, as it is the most recent information and trbi(ϕ) = trbi(¬ϕ) = tr(Agi, sj). Following this approach, we can determine a timeline: Table 2 shows that at times 1 and 2, Agi believes ϕ, while at times 3 and 4, it believes ¬ϕ.

Using only Time
Time can be considered in different ways, just as trust and other meta-information can. Here, we present two approaches. First, we define some functions that consider only time; for this first approach, it is interesting to establish a relation between the times when a piece of information ϕ was received and the number of sources that informed ϕ. The second approach, in Section 5.2, presents how time can be used together with trust. The importance of showing both approaches is to demonstrate, in practice, that one meta-information can be used from different points of view, depending on the domain.
To consider only time, we define the relevance of a belief ϕ as a value determined by a relation between the number of sources of ϕ and the time when each source informed ϕ. In other words, we have two formulae, Rt and Rel: Rt(ϕ, t) = (ns(ϕ, t) ⊗_tme t) ⊗_nrm ns(¬ϕ, t), where t is some time of the system (e.g., time 1, 2, 3 or 4 in Table 2), ns(ϕ, t) returns the number of sources that informed ϕ at time t, ⊗_tme is a generic relation that relates the number of sources and the time when they informed ϕ, and ⊗_nrm is a relation that takes into account the sources that informed ¬ϕ at the same time. ⊗_nrm is useful to make the result of Rt a value between 0 and 1, considering the number of sources that informed ϕ and ¬ϕ. Rel(ϕ) = Rt(ϕ, t0) ⊗_rt . . . ⊗_rt Rt(ϕ, tn), where {t0, . . ., tn} is the set of times when ϕ was informed (e.g., times 1 and 3 for ϕ in Table 2), and ⊗_rt is a generic relation that shall be implemented according to the agent micro profile for time, relating all the Rt values of ϕ.
It is important to emphasise that the implementation of ⊗_tme, ⊗_nrm and ⊗_rt will determine whether the agent prioritises the most recent or the oldest information, depending on the environment. Following this, we briefly exemplify two micro profiles for agents that consider time: Definition 7 (Dynamic Agent) A dynamic agent considers the most recent information received as the most accurate.
A dynamic agent implements Rt(ϕ, t) as: Rt(ϕ, t) = (ns(ϕ, t) / (ns(ϕ, t) + ns(¬ϕ, t))) · (1/∆T), where ∆T = now/t, with now being the current time of the system (e.g., time 4 in Table 2). This formula considers the number of sources for ϕ, dividing it by the total number of sources for ϕ and ¬ϕ; this division is important to keep the value of Rt between 0 and 1. The multiplication in Rt is important to consider the time t: the older t is, the smaller the Rt value will be. Definition 8 (Conservative Agent) A conservative agent considers the oldest information as the most consolidated, prioritising it.
A conservative agent implements Rt(ϕ, t) as: Rt(ϕ, t) = (ns(ϕ, t) / (ns(ϕ, t) + ns(¬ϕ, t))) · (1/t). This formula is similar to the Rt formula of the dynamic agent, but, instead of ∆T, we use only t, so that the smaller t is, the greater Rt is.
Both agent profiles implement the Rel(ϕ) formula as: Rel(ϕ) = (Rt(ϕ, t1) + . . . + Rt(ϕ, tn)) / |T|, where T = {t1, . . ., tn} is the set of times when ϕ was informed and |T| is the length of T. This formula keeps the two properties we were seeking: (i) the older a piece of information is, the less relevant it should be considered, and (ii) the more sources a piece of information has, the more relevant it should be considered. Recall that other micro profiles can be defined as well, according to the domain.
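The two time micro profiles can be sketched as follows. The exact instantiations of the generic operators are assumptions for illustration: the normalisation divides by the total number of sources for ϕ and ¬ϕ, the recency factor is t/now for the dynamic agent and 1/t for the conservative one, and Rel averages the Rt values.

```python
# Sketch of the time-only relevance functions (Definitions 7 and 8).
# Operator instantiations (normalisation, recency factor, averaging)
# are assumptions consistent with the properties stated in the text.

def r_t_dynamic(ns_phi, ns_not_phi, t, now):
    """R_t for a dynamic agent: fraction of supporting sources,
    scaled so that older times (smaller t) yield smaller values."""
    return (ns_phi / (ns_phi + ns_not_phi)) * (t / now)

def r_t_conservative(ns_phi, ns_not_phi, t, now):
    """R_t for a conservative agent: older times (smaller t)
    yield larger values."""
    return (ns_phi / (ns_phi + ns_not_phi)) * (1 / t)

def rel(r_t, observations, now):
    """Rel(phi): average of R_t over all times phi was informed.
    observations: list of (ns_phi, ns_not_phi, t) tuples."""
    values = [r_t(ns, ns_not, t, now) for ns, ns_not, t in observations]
    return sum(values) / len(values)
```

Under these instantiations, a dynamic agent ranks recently informed beliefs higher, while a conservative agent ranks older ones higher, as Definitions 7 and 8 require.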

Using Time with Trust
The second approach we present in this work considers both time and trust, where time is used to appropriately apply the trust value. This approach is interesting for dynamic environments where trust is useful, such as the stock market example.
To present this approach, we introduce some functions that will be used. The function that returns the most recent time when a belief ϕ was received by Agi from a source sj is defined as timei(ϕ, sj). For example, considering Table 2, the most recent time when ϕ was received by Agi from sj is acquired using timei(ϕ, sj) = 1. As a belief ϕ can be received from different sources at different times, in Jason we can define one annotation for each source sk of ϕ, with the time when sk informed ϕ. For example, blue(box)[source(ag1),source(ag2),time(t1),time(t3)] means that the agent received blue(box) from Ag1 at time t1 and from Ag2 at time t3. If the same source informed ϕ at different times, each of those times can be annotated.
Considering an agent Agi, we define a function trsi(ϕ, sj) that returns the trust value on ϕ considering only sj as source at time timei(ϕ, sj). Considering S(ϕ) = {s1, s2, . . ., sn} as the set of sources for ϕ, Agi can have a different trs value for ϕ associated with each source. Here we define a generic operator ⊗_trs to relate the trust on a source sj with the last time when sj informed a belief, as trsi(ϕ, sj) = tr(Agi, sj) ⊗_trs (now − timei(ϕ, sj)), where now is the current time. Still considering the set S(ϕ), we define another function trti(ϕ) that relates all the trsi values for a piece of information ϕ, resulting in the trust that the agent Agi has on ϕ, according to the time and the trust on the sources. So, considering a generic operator ⊗_trt, we have trti(ϕ) = trsi(ϕ, s1) ⊗_trt . . . ⊗_trt trsi(ϕ, sn).
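A possible sketch of trs and trt follows, where ⊗_trs is instantiated as a decay of trust with the age of the report (1/(1 + age)) and ⊗_trt as the maximum over sources; both instantiations are assumptions for illustration, not the framework's definitions.

```python
# Sketch of trs and trt (Section 5.2). The generic operators are
# instantiated as assumptions: trs discounts trust by the report's
# age, and trt takes the maximum over sources.

def trs(trust_in_source, now, time_informed):
    """Trust on phi considering a single source, discounted by the
    age of the most recent time that source informed phi."""
    age = now - time_informed
    return trust_in_source * (1 / (1 + age))

def trt(source_reports, now):
    """Combine the trs values of all sources of phi.
    source_reports: list of (trust_in_source, time_informed) pairs."""
    return max(trs(tr, now, t) for tr, t in source_reports)
```

With these choices, a fresh report from a moderately trusted source can outweigh an old report from a highly trusted one, which is the intended effect of combining time with trust.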

Using Time with Trust on Arguments
Besides the use of time and trust in the agent's decision making, they can also be applied to the argumentation-based reasoning mechanism presented in Section 2. As arguments are built from different pieces of information, we can combine the trt value of each piece of information, defined in Section 5.2, to define a trust value for the argument, considering the time and trust of each premise and inference rule. Some initial contributions towards this approach can be found in [Melo et al. 2016a, Melo et al. 2016c].
As an example, consider that Agi is sceptical with regard to trust, but it considers time to calculate the trust value of an argument. To present a possible formula for Agi, consider the following definitions: S+ϕ(t) is the set of sources that informed ϕ at time t and S−ϕ(t) is the set of sources that informed ¬ϕ at time t. Then we can define the function trbt(ϕ), which returns the trust on a belief ϕ, considering only the sources that informed ϕ or ¬ϕ at a specific time t. The function trbt is very similar to trb defined for the sceptical agent in Section 4.1. To take the time information into account, prioritising the most recent information (the opposite approach could be used too), we now define a function trati(⟨P, c⟩), for an agent Agi, where ⟨P, c⟩ is an argument for c with the set P of premises, T is the set of times when ϕ was informed, ∆T = now/t, with now being the current time in the environment, |T| the length of T and |P| the length of P. The trat formula preserves both properties that we are seeking: (i) the older an argument is, the less trusted it should be, and (ii) the more reliable the sources are, the more trusted the argument should be.

Following the stock market example, now consider that the agent sh uses trust and time on arguments, prioritising the most recent information received. The result can be seen in Figure 5, where the trat_sh value for each argument is shown. This approach reduced the number of conflicting arguments from three (using trust and argumentation) to two. As described, the use of meta-information reduces the number of conflicts, but does not solve them all. The more meta-information is considered, the more cases will be solved, but there will still be cases that may not be solved. For this example, a new meta-information may be useful: in the next section, we consider the expertise of each source.
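One possible instantiation of trbt and trat, consistent with the properties stated above, can be sketched as follows. The sceptical minimum over sources, the recency weight t/now, and the averaging over times and premises are all assumptions of ours for illustration, not the paper's exact formulas.

```python
# Possible instantiation (assumed) of trbt and trat: sceptical
# minimum over sources at each time, recency weight t/now, and
# averaging over times and over premises.

def trbt(sources_at_t):
    """Trust on a belief at a specific time t, sceptical style:
    the minimum trust among the sources that informed it at t."""
    return min(sources_at_t)

def trat(premises, now):
    """Trust on an argument <P, c>: average over premises of the
    recency-weighted trbt values. premises: list of dicts mapping
    time t -> list of source trust values at t."""
    total = 0.0
    for reports in premises:
        times = sorted(reports)
        weighted = [trbt(reports[t]) * (t / now) for t in times]
        total += sum(weighted) / len(times)
    return total / len(premises)
```

Note that an argument supported only by old reports receives a lower trat value than one supported by recent reports from equally trusted sources, matching property (i).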

Expertise Approach
The last meta-information we present here is expertise. The expertise of a source is an interesting approach when agents in the environment are specialists in specific kinds of information. For example, it is more reasonable to consider the opinion of a doctor who is an expert in the particular health problem than the opinion of a general practitioner. Our approach to considering the expertise of the source uses reasoning patterns, for example, the so-called argumentation schemes [Walton et al. 2008]. Regarding the expertise of the sources, Walton [Walton 1996] introduces the argumentation scheme called argument from position to know, described below:
Major Premise: Source a is in a position to know about things in a certain subject domain S containing proposition A.
Minor Premise: a asserts that A (in domain S) is true (or false).
Conclusion: A is true (or false).
The associated critical questions (CQs) for this argumentation scheme are:
• CQ1: Is a in a position to know whether A is true (or false)?
• CQ2: Is a an honest (trustworthy, reliable) source?
• CQ3: Did a assert that A is true (or false)?
Therefore, due to the defeasible nature of the reasoning pattern, it can be analysed in a dialectical way, and its conclusion is evaluated through the critical questions. If the pattern of reasoning is valid, the trust value of that information can be set considering that it comes from an expert source and that there is no reason to doubt it.
As we are dealing with a value-based framework, it is necessary to attribute some kind of value to expert sources, including how well the critical questions are answered. These values could depend on the domain: safety-critical applications such as those related to health could give greater consideration to the expertise of the source. On the other hand, in some domains expertise may not be an interesting approach, for example, regarding the weather: the consequences of taking an umbrella or not are not as serious as those of a wrong diagnosis of a serious illness.
The argumentation scheme from position to know considers some meta-information about the information received: how trustworthy is the source? Is the source in a position to know about the subject? Was it that source that provided the information directly? Following the stock market scenario, consider that sh has the trust values on the sources shown in Table 3, and that there is now another source, ex1, that is an expert on the food market, more specifically on the orange and soybean markets, represented by gi:

Source/Belief      Trust Value
adv3               0.8
adv4               0.8
ex1                0.9
expert(ex1, gi)    1.0

Considering the values in Table 3, we will present two different approaches: first, considering that adv4 tells sh that ex1 told him the argument greatest_investment(orange), i.e., that the orange market is a good investment, the agent sh can increase the trust on the argument when it asks ex1 directly about greatest_investment(orange); second, sh can ask an expert for his opinion about the investments. Following both approaches, we define two agent micro profiles: Definition 9 (Suspicious Agent) A suspicious agent considers only the trust value of the source who provided the information to it, and ignores the trust on the original source who provided the information to the agent who informed the suspicious agent.
Consider that adv4 informed sh that ex1 told him that soybean is a good investment. Then, sh will define the trust level on greatest_investment(soybean) according to the trust level on adv4. But, if sh asks ex1 directly about greatest_investment(soybean), it can increase the trust level of this argument, because ex1 is the original source for this argument, and the trust on ex1 is greater than the trust on adv4. As observed, for a suspicious agent, even when receiving the information directly from the source, it aggregates the trust it has in the source, not the expertise of the source. Considering the expertise rather than the trust in the source can be very useful in some application domains, mainly because trust values have to be learned from experience, while the expertise of a particular source could be acquired from a reliable newspaper, web page, and so on.
Definition 10 (Expertise-recogniser Agent) An expertise-recogniser agent considers the trust value of the information based on how much the source is an expert in that subject.
Considering our scenario, if sh is an expertise-recogniser agent, the trust value for greatest_investment(soybean), received from ex1, becomes 1.0. Therefore, considering now that sh is an expertise-recogniser agent, it can finally decide to invest in the soybean stock market instead of the orange stock market. The result is represented in Figure 6.
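The two expertise micro profiles can be sketched as follows, with trust and expertise values taken from the stock market example; the function names and dictionary layout are ours, for illustration:

```python
# Sketch of the suspicious vs. expertise-recogniser micro profiles
# (Definitions 9 and 10), with values from the stock market example.

trust = {"adv4": 0.8, "ex1": 0.9}
expert = {("ex1", "gi"): 1.0}  # ex1 is an expert in the domain gi

def trust_suspicious(informer):
    """A suspicious agent only uses the trust on the agent that
    delivered the information, ignoring expertise and the original
    source."""
    return trust[informer]

def trust_expertise_recogniser(informer, domain):
    """An expertise-recogniser agent uses the expertise value when
    the informer is an expert in the information's domain, falling
    back to plain trust otherwise."""
    return expert.get((informer, domain), trust[informer])
```

Thus, for greatest_investment(soybean) received from ex1 in domain gi, the expertise-recogniser assigns 1.0 while the suspicious agent assigns only the learned trust value 0.9.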

Agents Profiles
As presented in Section 3, an agent can have different profiles for different meta-information, called here micro profiles. In this section, we briefly present how an agent can have a macro profile, that is, a profile that combines different meta-information in its reasoning. The macro profiles, like the micro profiles, can be defined according to the domain.
There are two approaches to combining meta-information. The first approach is to create an ordered list of the meta-information implemented. This way, when the n-th meta-information on the list cannot decide the conflict, the (n+1)-th meta-information will try to decide it. If no meta-information on the list can decide it, the conflict remains undecidable.
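This first, ordered-list approach can be sketched as a simple cascade; the resolve function and the representation of criteria as scoring functions are hypothetical, for illustration:

```python
# Sketch of the ordered-list macro-profile approach: meta-information
# criteria are tried in order until one decides the conflict.

def resolve(arg_a, arg_b, criteria):
    """criteria: ordered list of functions scoring an argument.
    Returns the preferred argument, or None if the conflict remains
    undecided after all criteria."""
    for score in criteria:
        if score(arg_a) > score(arg_b):
            return arg_a
        if score(arg_b) > score(arg_a):
            return arg_b
    return None  # undecidable conflict
```

For example, with criteria = [trust, recency], two equally trusted arguments are decided by recency, mirroring how the stock market example falls back from trust to time.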
The second approach is to combine different meta-information and consider all of them at the same time when a decision needs to be made, so that one meta-information directly impacts the other. For example, when an agent Agi wants to define the trust value on a belief ϕ, it may consider the time when ϕ was informed. We now present an example of a macro profile that combines trust and time to define the trust value trb of a belief. In Section 4.1, we defined two micro profiles to consider trust. Now we change the sceptical profile, combining it with the time micro profile of the dynamic agent, to consider trust and time in its formula. This way, we define the Sceptical Dynamic Agent: Definition 11 (Sceptical Dynamic Agent) A sceptical dynamic agent considers the number of sources, the trust value of each source and the time when each source informed the belief, prioritising the most recent time, to determine the trust value of the belief.
A sceptical dynamic agent implements trb so that the formula considers the number of sources for ϕ, their trust values and the time when each source informed ϕ, where ∆T = now/t, with now being the current time of the system. Other macro profiles can be defined, for example, combining trust and expertise, or trust, time and expertise. There are many possibilities, even combining meta-information not defined here. The definition of a macro profile will depend on the requirements of the domain.
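A possible instantiation of the sceptical dynamic agent's trb can be sketched as follows, assuming the minimum source trust scaled by recency and by the fraction of supporting sources; the exact combination is our assumption, consistent with Definition 11 but not taken from the paper:

```python
# Possible (assumed) instantiation of the sceptical dynamic trb
# (Definition 11): minimum source trust, scaled by recency and by
# the fraction of sources supporting phi.

def trb_sceptical_dynamic(reports, n_against, now):
    """reports: list of (trust_in_source, time_informed) for phi;
    n_against: number of sources for the negation of phi."""
    n_for = len(reports)
    min_trust = min(tr for tr, _ in reports)
    latest = max(t for _, t in reports)
    return min_trust * (latest / now) * (n_for / (n_for + n_against))
```

Under this instantiation, the result combines the sceptical micro profile (the minimum factor) with the dynamic one (the recency factor), as the macro profile intends.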

Related Work
In [Tang et al. 2011], the authors combine argumentation and trust, taking into account that trust on information is used in argument inferences. That work is based on [Parsons et al. 2011], which proposes a formal model for combining trust and argumentation, aiming to find relationships between these areas. Our work differs from [Parsons et al. 2011, Tang et al. 2011] in that we introduce an approach for computing trust values for beliefs, whereas in [Parsons et al. 2011, Tang et al. 2011] trust on a piece of information is assumed to be more directly available. Also, different from those approaches, we allow combining different sources for the same information (which is often the case in Jason agents) into a single trust value for that information. Further, we define agent profiles to facilitate the development of agents that require different perspectives on the trust values of multiple sources; this is not considered in [Parsons et al. 2011, Tang et al. 2011] either.
In [Amgoud and Ben-Naim 2015], the authors propose a new family of argumentation-based logics (built on top of Tarskian logic) for handling inconsistency.
In [Amgoud and Ben-Naim 2015], the authors define an approach in which the arguments are evaluated using a "ranking semantics", which orders the arguments from the most acceptable to the least acceptable ones. The authors argue that, with a total order of arguments, the conclusions that are drawn are ranked with regard to plausibility. Although [Amgoud and Ben-Naim 2015] does not use meta-information, the proposed approach provides ordered arguments, thus avoiding unresolved conflicts. Our approach follows the same principles, but provides different meta-information that can be used to evaluate the arguments.
In [Parsons et al. 2012a], the authors identify ten different patterns of argumentation, called schemes, through which an individual/agent can acquire trust in another. Using a set of critical questions, the authors show a way to capture the defeasibility inherent in argumentation schemes, and they are able to assess whether an argument is acceptable or fallacious. Our approach differs from [Parsons et al. 2012a] in that we are not interested in agents arguing about the trust (or any other meta-information) they have in each other. We are interested in using such meta-information (possibly combined) with an argumentation-based reasoning mechanism in order to resolve undecided conflicts between arguments.
In [Adrián Biga 2014], the authors present G-Jason, an extension of Jason which allows the creation of more flexible agents that reason about uncertainty. The authors represent belief degrees and grades using the annotation feature provided by Jason. Thus, the authors define the annotation degOfCert(X), where X is a value between 0 and 1 associated with the certainty of a belief, and planRelevance(LabelDegree) as a value associated with plans, where the LabelDegree value is based on the plan's context and its triggering event's degOfCert level. Our approach differs from [Adrián Biga 2014] in that we use different meta-information in order to infer a level of certainty on beliefs, and in our case from belief certainty we calculate the certainty of arguments as well.

In [da Costa Pereira et al. 2011], the authors present an approach for agents not to lose information that is currently incorrect about the environment. That work proposes a framework for changing the agent's mind without completely erasing previous information. The authors use possibility theory to represent uncertainty about information, using a fuzzy labelling function that assigns a trust degree n to sources and arguments, where n ∈ [0, 1]. Our approach differs from [da Costa Pereira et al. 2011] in some aspects. First, the authors in [da Costa Pereira et al. 2011] define two agent profiles: optimistic and pessimistic. Consider an argument A and S(A) = {a1, ..., an} as the set of sources of A.
An optimistic agent will set the trust of A according to the most reliable source ai ∈ S(A), and a pessimistic agent will set the trust of A according to the least reliable source aj ∈ S(A). Our approach considers that the trust of an argument is defined according to the trust of its beliefs, and the trust of a belief is defined according to the trust of its sources. Our approach also allows for a social perspective, as a sceptical agent will consider the number of sources of each belief to set its trust value. In [da Costa Pereira et al. 2011], it is stated that if an agent believes ϕ, it cannot believe ¬ϕ. Our approach differs in this aspect too, as we allow an agent to hold both ϕ and ¬ϕ in its belief base. Another interesting difference is that we extend the work in [Panisson et al. 2014], which uses defeasible logic, while [da Costa Pereira et al. 2011] uses a fuzzy approach and possibility theory. Furthermore, besides trust, we consider other meta-information in our work.
There is previous work considering time in multi-agent systems, for example [Braubach et al. 2006, Moreau 2005, Li and Zhang 2010]. However, most of it focuses on synchronisation of the information received by the agents. For example, [Braubach et al. 2006] provides a middleware service component for time management, useful for distributed multi-agent systems, while [Moreau 2005] studies a model of networks of agents interacting via time-dependent communication links, using graph theory and system-theoretic tools to analyse the convergence of individual agents' states to a common value according to the information received from other agents. In our work, we focus on time being used as meta-information for the agent to decide between conflicting beliefs. For our approach, all the multi-agent system needs is to record the current time when a piece of information is received by an agent. To the best of our knowledge, there is no work considering the meta-information of time in argumentation frameworks.
Regarding expertise in multi-agent systems, such meta-information is normally related to the role agents play in the multi-agent organisation, for example, in the MOISE [Hubner et al. 2007] and Electronic Institutions [Esteva et al. 2001] organisational models. In such models, when an agent plays the role of a doctor, for example, the agent has to be an expert in medicine (it is assumed that the agent has the right capabilities to achieve the goals expected of the agents playing that role). Such meta-information can be easily extracted from these organisational models. Other approaches for considering the expertise of the sources of information in multi-agent systems are normally modelled using reasoning patterns similar to the one described in Section 6. For example, in [Toniolo et al. 2013], the authors describe an approach in which the sources' expertise is fundamental in the process of intelligence analysis (forming hypotheses and testing them against evidence). While most of that work focuses on the use of expertise to create arguments to support a particular decision/conclusion, here we use expertise as meta-information to define preferences between arguments from different sources with different degrees of expertise.

Final Remarks
In this work, we described our recent research regarding meta-information in multi-agent systems. In particular, this work extends our previous work [Melo et al. 2016b, Melo et al. 2016a, Melo et al. 2016c]. While in our previous work we described how the meta-information of trust could be explored in order to help agents decide between conflicting beliefs and arguments, in this work we extend this first idea, using different meta-information (e.g., time and expertise) in order to define micro profiles. We also described how different micro profiles (defining how agents consider each type of meta-information) can be combined to define macro profiles, describing in their turn how agents consider the combination of the different meta-information available to them. As a result of our investigation, we have developed a modular framework for meta-information in multi-agent systems, and we have extended the argumentation-based reasoning mechanism presented in [Panisson and Bordini 2016, Panisson et al. 2014] to consider meta-information when the reasoning mechanism cannot resolve conflicts between arguments. Thus, agents are able to make a decision (whether based on argumentation or not) in situations where they otherwise could not make a clear one.
In our approach, when we define preferences between conflicting arguments, we are combining the symbolic representation from classical argumentation with approaches for value-based argumentation (i.e., approaches that define preferences between arguments). Such an approach is very powerful, given that agents are able to make decisions they could not make without such preferences. In particular, our approach derives such values from meta-information that is easily available in multi-agent systems, which demonstrates the applicability of our approach.
Further, we have illustrated our approach using a stock market scenario, showing that it allows agents to consider different meta-information available in the multi-agent system in order to make a decision among different opinions regarding different investment options.

Figure 1. Graph representing the attack relation based on arguments in ∆.

Figure 2. Stock market environment. The agent sh, using only argumentation-based reasoning, does not know what to believe.
Figure 3. Trust Network Example

Figure 4. Stock market environment. The agent sh, using argumentation-based reasoning and trust, reduces the number of conflicting arguments from four to three.

Figure 5. Stock market environment. The agent sh, using argumentation-based reasoning, trust and time, reduces the number of conflicting arguments to two.

Figure 6. Stock market environment. The agent sh, using expertise, after considering the other meta-information presented above, can finally decide to invest in soybean.

Table 1. Different sources and their trust values
To calculate how much Agi trusts a piece of information ϕ, it is necessary to know how much Agi trusts the sources of ϕ. Following this approach, we introduce the function trb : ϕ → ℝ, where trbi(ϕ) returns the trust value that Agi has on belief ϕ, according to the trust on its sources.

MELO, V. S.; PANISSON, A. R.; BORDINI, R. H. Meta-Information and Argumentation in Multi-Agent Systems. iSys | Revista Brasileira de Sistemas de Informação, Rio de Janeiro, vol. 10, no. 3, pp. 74-97, 2017.

Table 2. Time when ϕ and ¬ϕ were received

Consider the discrete time representation in Table 2, representing when the beliefs ϕ and ¬ϕ were received by Agi.