Thoughts following on from 11 January’s supervisor meeting.
Why would agents want to share knowledge? Naturally, sharing knowledge expands the knowledge-base of each agent. But sharing knowledge (through argumentation) also has other benefits, which include the eradication of false beliefs and resolution of conflicts.
As a particular example of the benefits of knowledge-sharing and argumentation, consider the 2001 paper ‘Dialogues for Negotiation: Agent Varieties and Dialogue Sequences’ [1]. The work presents a “formal, logic-based approach to one-to-one agent negotiation, in the context of goal achievement in systems of agents with limited resource availability”. The proposed solution is “based on agent dialogues, as a way of requesting resources, proposing resource exchanges, and suggesting alternative resources”. The paper also mentions, in passing, two performatives of agent dialogues that have an argumentative dimension:
- challenge – used to ask for a reason (justification) for a past move.
- justify – used to justify a past move by means of a support.
These two performatives lay down the basis for extending and further generalising the proposed agent negotiation.
Argumentative approaches to agent negotiation (via dialogue) already exist [2, 3, 4, 5], some of which are “in a way more general” than the work presented in [1]. The real challenge, however, is to generalise the negotiation of [1] while retaining the logic-based properties that lend themselves to theoretically provable results, and without making it any less operational.
Dialogues are modelled by logic-based dialogue constraints, which are (possibly non-ground) if-then rules, contained in the knowledge-base of each agent, of the form:
p(T) & C => p’(T + 1),
where p is the received performative (the trigger), p’ is the uttered performative (the next move) and C is a conjunction of literals in the language of the agent’s knowledge-base (the condition of the dialogue constraint). Intuitively, the dialogue constraints of an agent express its policies towards other agents. Currently, these policies remain unknown to other agents, so in making or responding to requests the reasoning behind decisions remains largely hidden. Further, allowing argumentation (challenges and justifications) to follow requests and responses has the potential to increase the number of possible deals and to improve the outcomes. This will be demonstrated after a brief mention of the other aspects of the knowledge-base (a rough sketch of which follows the list):
- domain-specific as well as domain-independent beliefs that the agent can use to generate plans for its goal, such as the knowledge describing preconditions and effects of actions;
- information about the resources available to the agent;
- information about the dialogues in which the agent has taken part;
- information on the selected intention, consisting of the given goal, a plan for the given goal, as well as the set of resources already available to the agent and the set of missing resources, both required for the plan to be executable.
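To make the above a little more concrete, here is a minimal Python sketch of such a knowledge-base, with a dialogue constraint modelled as a trigger/condition/response triple. The names (Performative, DialogueConstraint, KnowledgeBase, next_move) and this operational reading are my own shorthand for illustration, not the formalism of [1]:

from dataclasses import dataclass, field
from typing import Callable, List, Optional, Set

# Hypothetical encoding of performatives, dialogue constraints and the
# knowledge-base components listed above; names are illustrative only.
@dataclass
class Performative:
    kind: str      # e.g. "request", "refuse", "challenge", "justify"
    content: dict  # e.g. {"resource": "r3"}
    time: int      # dialogue step T

@dataclass
class DialogueConstraint:
    trigger: str  # kind of the received performative p
    condition: Callable[["KnowledgeBase", Performative], bool]       # the condition C
    respond: Callable[["KnowledgeBase", Performative], Performative] # produces p' at T + 1

@dataclass
class KnowledgeBase:
    beliefs: Set[str] = field(default_factory=set)    # domain knowledge (action preconditions/effects)
    resources: Set[str] = field(default_factory=set)  # resources currently held
    dialogue_history: List[Performative] = field(default_factory=list)
    intention: dict = field(default_factory=dict)     # goal, plan, available and missing resources
    constraints: List[DialogueConstraint] = field(default_factory=list)

    def next_move(self, received: Performative) -> Optional[Performative]:
        """Fire the first dialogue constraint whose trigger and condition both match."""
        self.dialogue_history.append(received)
        for c in self.constraints:
            if c.trigger == received.kind and c.condition(self, received):
                reply = c.respond(self, received)
                self.dialogue_history.append(reply)
                return reply
        return None  # no constraint fired: no move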
The purpose of the negotiation is for the agent to obtain the missing resources, while retaining the available ones that are necessary for the plan in its current intention. To illustrate the advantages of argumentation (and knowledge-sharing) in conflict resolution, consider three agents (a1, a2, a3) and five resources (r1, r2, r3, r4, r5) as follows:
a1: resources = {r1, r2}, intention = { plan(p1), available(r1), missing(r3), goal(g1) }
a2: resources = {r3}, intention = { plan(p2), available(r3), missing(r1), goal(g2) }
a3: resources = {r4, r5}, intention = { plan(p3), available(r5), missing(), goal(g3) }
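As a quick, purely illustrative encoding of this scenario (the dictionary layout and the conflicts helper below are my own, not part of [1]):

# The example scenario written down directly.
agents = {
    "a1": {"resources": {"r1", "r2"},
           "intention": {"goal": "g1", "plan": "p1",
                         "available": {"r1"}, "missing": {"r3"}}},
    "a2": {"resources": {"r3"},
           "intention": {"goal": "g2", "plan": "p2",
                         "available": {"r3"}, "missing": {"r1"}}},
    "a3": {"resources": {"r4", "r5"},
           "intention": {"goal": "g3", "plan": "p3",
                         "available": {"r5"}, "missing": set()}},
}

def conflicts(agents):
    """Yield (a, b, resources) where agent a is missing resources that agent b
    needs for its own intention (i.e. b will refuse a's request for them)."""
    for a, ka in agents.items():
        for b, kb in agents.items():
            clash = ka["intention"]["missing"] & kb["intention"]["available"]
            if a != b and clash:
                yield a, b, clash

print(list(conflicts(agents)))
# [('a1', 'a2', {'r3'}), ('a2', 'a1', {'r1'})]: a1 and a2 block each other.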
In this example, the agent a1 needs resources r1 and r3 to make its plan p1 executable. Currently it has r1 but not r3. Further, it has another resource r2 that it does not need. In order for a1 to obtain r3, which is held by a2, it needs to make a request for it. However, r3 is also needed by a2 according to its own current intention and thus a2 will refuse a1’s request. Similarly, a2 requires r1 according to its current intention but cannot obtain it since it is also required by a1, which currently holds it.
Since neither a1 nor a2 can proceed, there needs to be some form of conflict resolution. One way of achieving this would be for a1, which made the request, to change its plan and try to fulfil its goal with a new plan that needs a different set of (obtainable) resources. Alternatively, a1 could challenge a2 as to why it refused and, based on the justification, try to convince a2 to give up the resource r3.
Suppose that a1 makes a request to a2 for resource r3, which is refused. Assuming that a1 cannot form an alternative plan, it challenges the refusal. a2 justifies the refusal by declaring that it needs r3 as part of its current plan p2 to achieve the goal g2. Suppose also that there is an alternative plan p2’ to p2 for the goal g2 that involves resources r2 and r4, and that a1 knows this. Now a1 has r2 in its possession and can obtain r4 from a3, so it can propose that a2 change its plan to p2’, accept resources r2 and r4, and hand over r3 in exchange, thus resolving the conflict.
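The dialogue just described might be rendered roughly as the following trace; request, refuse, challenge and justify are the performatives discussed above, while propose_exchange and accept are my own shorthand for the closing deal:

# Hypothetical rendering of the dialogue, for illustration only.
trace = [
    ("a1", "a2", "request",          {"resource": "r3"}),
    ("a2", "a1", "refuse",           {"resource": "r3"}),
    ("a1", "a2", "challenge",        {"move": "refuse(r3)"}),
    ("a2", "a1", "justify",          {"support": "need(r3, plan(p2), goal(g2))"}),
    # a1 holds r2, can obtain r4 from a3, and knows the alternative plan p2'
    # for g2 that uses r2 and r4, so it proposes an exchange:
    ("a1", "a2", "propose_exchange", {"give": {"r2", "r4"}, "take": {"r3"}, "new_plan": "p2'"}),
    ("a2", "a1", "accept",           {"give": {"r3"}, "take": {"r2", "r4"}}),
]
for sender, receiver, performative, content in trace:
    print(f"{sender} -> {receiver}: {performative} {content}")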
No assumption is made on how plans for intentions are generated. It is assumed, however, that the intention is such that the plan it contains allows the goal to be achieved. As well as the conflict resolution illustrated in the example above, argumentation could also be used as a means of convincing agents that their intentions are impossible to achieve. By doing so, agents can potentially convince each other to modify their plans/intentions and thus agree to more resource exchanges than would otherwise be possible.
As an illustration of argumentation used to eradicate false beliefs or infeasible plans, consider the three agents and five resources from the previous example. Assume that r1 is required for a1 to achieve its goal g1 regardless of the chosen plan (i.e. a1 will never give it up). A request from a1 to a2 for r3 will result in refusal, since a2 also needs r3. a1 can follow this refusal with a challenge questioning it. a2 will respond by justifying its need for r3 (and r1) to carry out its plan p2. Since r1 is indefinitely unobtainable, a1 can then follow the justification by notifying a2 of this and suggesting that a2 change its plan. This will cause a2 to change its plan and may result in a2 handing r3 to a1 as required.
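A rough trace of this second dialogue, with notify and suggest as illustrative names for the final two moves (they are not taken from [1]):

trace = [
    ("a1", "a2", "request",   {"resource": "r3"}),
    ("a2", "a1", "refuse",    {"resource": "r3"}),
    ("a1", "a2", "challenge", {"move": "refuse(r3)"}),
    ("a2", "a1", "justify",   {"support": "need(r1, r3, plan(p2), goal(g2))"}),
    # r1 is needed by a1 for g1 under every plan, so p2 can never be executed:
    ("a1", "a2", "notify",    {"fact": "unobtainable(r1)"}),
    ("a1", "a2", "suggest",   {"change_plan": "p2"}),
]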
Potentially, agents should be able to share and modify any aspect of their knowledge-base, including the dialogue constraints. In the case of dialogue constraints, sharing these would be useful for understanding what response an agent would give under certain conditions, or for understanding the conditions that gave rise to a certain response. In the case of the dialogue history and knowledge of resource holders, sharing would be useful for avoiding redundant communication. Sharing intentions has the benefits illustrated in the examples above.
References
[1] F. Sadri, F. Toni, and P. Torroni. Dialogues for negotiation: agent varieties and dialogue sequences. 2001.
[2] L. Amgoud, S. Parsons, and N. Maudet. Arguments, dialogue and negotiation. 2000.
[3] S. Kraus, K. Sycara, and A. Evenchik. Reaching agreements through argumentation: a logical model and implementation. 1998.
[4] S. Parsons, C. Sierra, and N. R. Jennings. Agents that reason and negotiate by arguing. 1998.
[5] K. Sycara. Argumentation: planning other agents’ plans. 1989.