Tuesday 30 January 2007

Literature Discussion

Thoughts following on from 29 January’s supervisor meeting.

Discussion regarding ‘Games that agents play’

In what sense is the formalism “logic-based”? What does it mean to be “logic-based”?

Firstly, there is informal dialogue. An example would be a natural discussion between two people: no real rules govern the interaction, and it would most likely proceed in an ad hoc manner.

Moving on from this, there is formal dialogue, and this is normally what is meant by “logic-based”; formal in the sense that there are rules that govern the interaction.

The next step up (and final step) is algorithm-based dialogue (i.e. an implementable formalism). This is in essence the reason for computer scientists formalising (“logicising”) dialogue, since our objectives are not primarily philosophical.

With so much undefined, should they be allowed to make a claim that “the formalism is computational”?

Not really, but unfortunately this is the case with a lot of papers. It should not be enough to solely present a framework and say that it is “logic-based” and that it is “the” solution. You need to prove that it works (which is the essence of making it logic-based) and that it is “a” solution. Further, you need to show (in some way?) that it is better than other frameworks.

They say the framework “is potentially generative. For it to be so, we would need to have procedures which could automatically generate each of the type of dialogues if and when required.” The proposal is so abstract, and the work that remains is far greater than the work already done, so, again, can they make such claims?

Again, no. There needs to be “proof”. The realisation/implementation needs to be demonstrated.

They speak of Reed’s formalism [Dialogue frames in agent communication] as “descriptive rather than generative, in that it does not specify the forms of utterances, nor the rules which govern their formation, issuances and effects”. But to be honest, much of this criticism applies to this paper also, even though they say “the Agent Dialogue Framework we have proposed… is potentially generative as well as descriptive”.

Yes. A lot of papers that you will find will be wordy despite assurances to the contrary.

Discussion regarding ‘Automated Negotiation’ and the opening two chapters of ‘Argumentation-Based Negotiation’

What exactly is a game-theoretic/heuristic approach to negotiation?

In such approaches there are many assumptions that lend themselves well to competitive games such as chess and the prisoner’s dilemma, where agents are self-interested, have full knowledge of the world and know what they want (i.e. to win the (well-defined) game). However, there will certainly be occasions where such characteristics do not apply well to negotiation/argumentation settings, since agents may be altruistic, may not have full knowledge of the world, of resources or of other agents, and may not be able to correctly identify what they want until they have gone through some rounds of dialogue.

In a particular negotiation – what would be going on? Can the work of Sadri et al. or that of my Master’s project be classified as game-theoretic?

In the sense that agents take turns, make strategic moves and have goals, yes, it could be considered game-theoretic. However, the basic assumptions of game theory do not apply, as discussed above.

“Argumentation-based approaches allow for more sophisticated forms of interaction than their game-theoretic and heuristic counterparts.” How and why?

Argumentation is not limited to making/accepting/rejecting proposals and counter-proposals; agents can also exchange meta-level information such as critiques, justifications, threats, rewards and appeals.

Do the attacks on game-theory by the argumentation community have solid basis? Would a game-theorist have some sort of response to each of the attacks? What would they say?

Naturally, they would defend their work in an almost religious manner. It would be worth reading some recent literature to get a better understanding of game theory and to be able to better justify your approach as opposed to a game-theoretic one – however, not too much time should be spent on this.

Discussion regarding most argumentation literature

The communication (negotiation, argumentation, whatever) in most literature is assumed to be between two agents. This is in a sense cheating. A lot of the really interesting and challenging problems lie in multiple agent dialogue.

There could be problems during the dialogue. For example, suppose two agents are negotiating and the first of the two makes a commitment to the other. But then during the course of the dialogue, a third agent comes in and offers the first agent a better deal. What should the first agent do?

There could also be problems after a particular dialogue has closed. For example, suppose two agents have ended an argumentation and mutually agreed on a particular standpoint for a proposition. Suppose then that a third agent correctly convinces the first agent of a conflicting standpoint. Now the second agent unknowingly holds an incorrect standpoint. Does this need to be corrected? How? Perhaps the first agent has a duty to inform it? Perhaps there is a central blackboard for corrections? Perhaps a global announcement from the first agent is due?

There is an abundance of work on one-to-many (centralised) negotiation, such as auctioning, as described in [1] for example; however, many-to-many (distributed) argumentative negotiation seems to be an open and challenging area to head towards.

References
[1] P. Torroni and F. Toni. Extending a logic-based one-to-one negotiation framework to one-to-many negotiation. 2002.

Wednesday 24 January 2007

Discussion of Research Plan and Milestones

Thoughts following on from 18 January’s supervisor meeting.

Discussion of Research Plan
As well as further reading into and understanding of the open issues mentioned previously, the initial plan is as follows:

Firstly, identify and classify the different types of dialogue (enquiry, information-seeking, persuasion, negotiation, eristic, etc.). Of particular interest will be the work of Douglas Walton [1] and Peter McBurney [2]. At this stage, these dialogues will be looked at independently of argumentation. Following on from this, if possible, the different dialogues will be brought together into a single all-incorporating approach, with consideration of the role argumentation can play in this.

Secondly, work on argumentation for/with dialogues. Of particular interest will be the work of Henry Prakken, Leila Amgoud, Simon Parsons, Chris Reed and Peter McBurney. Argumentation will be investigated as a tool and way of serving the purpose of communication, that is:
- Achieving the different types of dialogue;
- “implementing” negotiation;
- building a joint line of reasoning between agents, i.e., expressing and sharing internal evaluations of argument pros and cons;
- agents working together to come up with plans of action.

These first two issues can be seen as two separate levels:
Firstly, the specification. What are the different ways of communicating? What different dialogues are there? What are the requirements of the communication?
Secondly, the realisation. How can the communication be modelled? Can the different types of dialogue be tied together using argumentation? If so, how? If not, to what extent can argumentation be used to bring various communication dialogues together?

Thirdly, investigate possible architectures for negotiating agents (i.e. what goes on inside the agents and guides the dialogues), starting from the abstract BDI framework.

Discussion of Milestones
There are four main milestones that will guide the work. These are as follows:

Firstly, decide upon an agent model – possibly the BDI approach, with bridge rules linking the beliefs, desires and intentions. At this stage, to keep the formalisation simple and abstract, since the purpose here is to create a generic model.

Secondly, build a negotiation model where dialogues incorporate
- persuasion
- information-seeking
- inquiry (for pooling joint knowledge)
- negotiation
- deliberation (for agreeing on a joint action, possibly by modifying desires)
These features should co-exist in the new model to provide the outcome we need, which is to allow more solutions than would be possible with a subset of these features alone. Another possibility to be considered is nested dialogues (i.e. one dialogue within another). To start with, the work of Simon Parsons and Peter McBurney will be checked, as well as that of Douglas Walton and Erik Krabbe.

Thirdly, define agent policies – required for examples and beyond. The policies are concrete definitions of the agent behaviour. They determine how agents are to go about achieving the negotiation, and the outcomes resulting from agent dialogues.

Fourthly, realisation by means of making concrete choices:
- Dialogue constraints
- Argumentation
- Concrete agent architecture – for example, KGP-like. Making decisions on how we are going to do it. For example, deciding where the knowledge comes from.
This will be the final stage which will be close to (and lend itself to) implementation. Note that realisation does not mean implementation per se.

Further Discussion
One possibility discussed was doing a survey paper on ‘Argumentation for Negotiation’, going through the different approaches/aspects of argumentation in negotiation, such as priorities in rules, defeasibility etc. The objective will be to avoid a waffly survey in favour of a more technical one.

Also, to consider allowing agent goals to change. Since the goal pretty much characterises the agent, this may not be possible except by agents shifting between a set of pre-defined goals rather than adopting completely new goals. Of initial interest is the work of Antonis Kakas [3] on goal decision in autonomous agents.

Tuesday 23 January 2007

5, Automated Negotiation

Notes taken from ‘Automated Negotiation: Prospects, Methods and Challenges’ (2001), by N. R. Jennings et al.

“… This paper is not meant as a survey of the field of automated negotiation. Rather, the descriptions and assessments of the various approaches are generally undertaken with particular reference to work in which the authors have been involved…”

The major contribution of this paper has been to:
- Examine the space of negotiation opportunities for autonomous agents;
- Identify and evaluate some of the key techniques;
- Highlight some of the major challenges for future automated negotiation research;
- Lay the foundations for building flexible (persuasive) negotiators;
- Argue that automated negotiation is a central concern for multi-agent systems research;
- Develop a generic framework for classifying and viewing automated negotiations, and then use it to discuss and analyse the three main approaches that have been adopted to automated negotiation.

1, Introduction
Agent interactions can vary from simple information interchanges, to requests for particular actions to be performed and on to cooperation and coordination. However, perhaps the most fundamental and powerful mechanism for managing inter-agent dependencies at run-time is negotiation – the process by which a group of agents come to a mutually acceptable agreement on some matter…

Automated negotiation research can be considered to deal with three broad topics:
- Negotiation Protocols: the set of rules that govern the interaction…
- Negotiation Objects: the range of issues over which agreement must be reached…
- Agents’ Decision Making Models: the decision making apparatus the participants employ to act in line with the negotiation protocol in order to achieve their objectives…

2, A generic framework for automated negotiation
Negotiation can be viewed as a distributed search through a space of potential agreements… For a given negotiation, the participants are the active components that determine the direction of the search…

… The minimum requirement of a negotiating agent is the ability to make and respond to proposals… However, this can be very time-consuming and inefficient since the proposer has no means of ascertaining why the proposal is unacceptable, nor whether the agents are close to an agreement, nor in which dimension/direction of the agreement space it should move next…

… The recipient needs to be able to provide more useful feedback on the proposals it receives, in the form of a critique (comments on which parts of the proposal the agent likes or dislikes) or a counter-proposal (an alternative proposal generated in response to a proposal)…

With proposals, critiques and counter-proposals alone, agents cannot, for example, justify their negotiation stance or persuade one another to change theirs. In argumentation-based negotiation, on the other hand, the negotiator seeks to make the proposal more attractive (acceptable) by providing additional meta-level information in the form of arguments for its position.

Arguments have the potential to increase the likelihood (by persuading agents to accept deals that they may previously have rejected) and/or the speed (by convincing agents to accept their opponent’s position on a given issue) of agreements being reached. Common categories of arguments include the following (a toy sketch follows the list):
- Threats (failure to accept this proposal means something negative will happen to you);
- Rewards (acceptance of this proposal means something positive will happen to you);
- Appeals (you should prefer this option over that alternative for some reason).
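As a toy illustration of these categories (the enum and the example proposal below are invented for this sketch, not taken from the paper):

from enum import Enum

class ArgumentKind(Enum):
    THREAT = "failure to accept this proposal means something negative will happen to you"
    REWARD = "acceptance of this proposal means something positive will happen to you"
    APPEAL = "you should prefer this option over that alternative for some reason"

# A proposal accompanied by a supporting argument (content invented):
offer = ("sell(car, 5000)", ArgumentKind.REWARD, "accepting today includes a year of free servicing")
print(offer[1].name, "-", offer[1].value)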

3, Game theoretic models
Game theory is a branch of economics that studies (analyses and formalises) interactions between self-interested agents… In order for an agent to make the choice that optimises its outcome, it must reason strategically (i.e. take into account the decisions that other agents may make, and assume that they will act so as to optimise their own outcome).

… It turns out that the search space of strategies and interactions that needs to be considered has exponential growth, which means that the problem of finding an optimal strategy is in general computationally intractable…

Game theoretic techniques can be applied to two key problems:
1. The design of an appropriate protocol that will govern the interactions between the negotiation participants. [1]
2. The design of a particular (individual-welfare maximising) strategy (the agents’ decision making models) that individual agents can use while negotiating.

Despite the advantages, there are a number of problems associated with the use of game theory when applied to automated negotiation:
- Game theory assumes that it is possible to characterise an agent’s preferences with respect to possible outcomes… With more complex (multi-issue) preferences, it can be hard to use game theoretic techniques.
- The theory has failed to generate a general model governing rational choice in interdependent situations…
- Game theory models often assume perfect computational rationality, meaning that no computation is required to find mutually acceptable solutions within a feasible range of outcomes. Furthermore, this space of possible deals (which includes the opponents’ information spaces) is often assumed to be fully known by the agents, as are the potential outcome values… Even if the joint space is known, knowing that a solution exists is entirely different to knowing what the solution actually is.

4, Heuristic approaches
This is the major means of overcoming the aforementioned limitations of game theoretic models. Such methods acknowledge that there is a cost associated with computation and decision making, and so seek to search the negotiation space in a non-exhaustive fashion, thus aiming to produce good, rather than optimal, solutions. The key advantages can be stated as follows:
- the models are based on realistic assumptions; hence they provide a more suitable basis for automation and they can, therefore, be used in a wider variety of application domains;
- the designers of agents, who are not wedded to game theory, can use alternative, and less constrained, models of rationality to develop different agent architectures.
The comparative disadvantages are:
- the models often select outcomes (deals) that are sub-optimal; this is because they adopt an approximate notion of rationality and because they do not examine the full space of possible outcomes;
- the models need extensive evaluation, typically through simulations and empirical analysis, since it is usually impossible to predict precisely how the system and the constituent agents will behave in a wide variety of circumstances.

5, Argumentation-based approaches
The basic idea is to allow additional information to be exchanged, over and above proposals. This information can be of a number of different forms, all of which are arguments which explain explicitly the opinion of the agent making the argument.

In addition to rejecting a proposal, an agent can:
- Offer a critique of the proposal, explaining why it is unacceptable (thus identifying an entire area of the negotiation space as being not worth exploring by the other agent);
- Accompany a proposal with an argument which says why the other agent should accept it (thus changing the other agent’s region of acceptability).

… Agents may not be truthful in the arguments that they generate. Thus, when evaluating an argument, the recipient needs to assess the argument on its own merits and then modify this by its own perception of the argument’s degree of credibility in order to work out how to respond.

…Using argumentation in real agents means handling the complexities of the agents’ mental attitudes, communication between agents, and the integration of the argumentation mechanisms into a complex agent architecture [3].

For the future, two main areas of work remain:
1. The definition of suitable argumentation protocols, that is, sets of rules that specify how agents generate and respond to arguments based upon what they know. [6, 7]
2. The transition between the underlying negotiation protocol and the argumentation protocol. When is the right time to make this transition; that is, when is it right to start an argument?

… the problem with such methods is that they add considerable overheads to the negotiation process, not least in the construction and evaluation of arguments…

6, Conclusions
Much research still needs to be performed in the area of automated negotiation, including:
- Extending and developing the specific approaches that have been discussed herein and even developing new methods…
- Development of a best practice repository for negotiation techniques. That is, a coherent resource that describes which negotiation techniques are best suited to a given type of problem or domain (much like the way that design patterns function in object-oriented analysis and design)…
- Advancing work on knowledge elicitation and acquisition for negotiation behaviour. At present, there is virtually no work on how a user can instruct an agent to negotiate on their behalf…
- Developing work on producing predictable negotiation behaviour…

Monday 22 January 2007

4, Games That Agents Play

Notes taken from ‘Games that agents play: A formal framework for dialogues between autonomous agents’ (2001), by Peter McBurney and Simon Parsons

“… Our ultimate objective in this work is to represent complex dialogue occurrences which may involve more than one atomic type, e.g. dialogues which may contain sub-dialogues embedded within them…”

The major contribution of this paper has been to:
- Present a logic-based formalism for modelling of dialogues between intelligent and autonomous software agents.
- Build on a theory of abstract dialogue games.
- Enable representation of complex dialogues as sequences of moves in a combination of dialogue games.
- Allow dialogues to be embedded inside one another.
- Enable different types of dialogues to be represented, because of its modular nature.
- Develop a formal and potentially-generative language for dialogues between autonomous agents which admits combinations of different types of dialogues.
- Extend previous work in formalising generic dialogue game protocols.
- Present a single, unifying framework for representing disparate types of dialogue, including those in the typology of [36].

1, Introduction
Autonomous agents interact to achieve individual or group objectives, on the basis of possibly different sets of assumptions, beliefs, preferences and objectives.

2, Dialogues and Dialogue Games
Types of dialogue (as in [36] – based upon the information the participants have at the commencement of a dialogue, their individual goals for the dialogue, and the goals they share):
- Information-Seeking Dialogues are those where one participant seeks the answer to some question(s) from another participant, who is believed by the first to know the answer(s).
- In Inquiry Dialogues the participants collaborate to answer some question or questions whose answers are not known to any one participant.
- Persuasion Dialogues involve one participant seeking to persuade another to accept a proposition he or she does not currently endorse.
- In Negotiation Dialogues, the participants bargain over the division of some scarce resources.
- Participants of Deliberation Dialogues collaborate to decide what action or course of action should be adopted in some situation.

Dialogue games are interactions between two or more players where each player “moves” by making utterances. The components/rules, illustrated by the code sketch after this list, are:
- Commencement Rules, which define the circumstances under which the dialogue commences.
- Locutions, which indicate what utterances are permitted, e.g., assert propositions, question/contest prior assertions, justify assertions.
- Combination Rules, which define the dialogical contexts under which particular locutions are permitted or not, or obligatory or not.
- Commitments, which define the circumstances under which participants express commitment to a proposition.
- Termination Rules, which define the circumstances under which the dialogue ends.
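As a minimal sketch of how these five rule types might be realised in code (the class, the locution names and the specific rules below are all invented for illustration; they are not part of McBurney and Parsons’ formalism):

from dataclasses import dataclass, field

@dataclass
class Move:
    speaker: str
    locution: str   # e.g. "assert", "question", "contest", "justify", "withdraw"
    content: str    # the proposition or argument uttered

@dataclass
class DialogueGame:
    locutions: set                                    # permitted locution types
    history: list = field(default_factory=list)
    commitments: dict = field(default_factory=dict)   # speaker -> set of propositions

    def commencement(self, move):
        # Commencement rule: here, only a permitted "assert" may open the dialogue.
        return move.locution == "assert" and move.locution in self.locutions

    def combination(self, move):
        # Combination rule: e.g. "contest" is only legal after some "assert".
        if move.locution == "contest":
            return any(m.locution == "assert" for m in self.history)
        return move.locution in self.locutions

    def make_move(self, move):
        legal = self.commencement(move) if not self.history else self.combination(move)
        if not legal:
            raise ValueError("move violates the dialogue game rules")
        self.history.append(move)
        # Commitment: asserting a proposition commits the speaker to it.
        if move.locution == "assert":
            self.commitments.setdefault(move.speaker, set()).add(move.content)

    def terminated(self):
        # Termination rule: here, the dialogue ends on an explicit "withdraw".
        return any(m.locution == "withdraw" for m in self.history)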

… We suggest that agent dialogue protocols should be defined in purely syntactical terms, so that conformance with the protocol may always be verified by observing actual agent utterances (externalisation)…

… We distinguish between dialogical commitments, which incur burdens on the speaker only inside the dialogue, and semantic commitments, which incur burdens on the speaker in the world beyond the dialogue…

3, Formal Dialogue Frameworks
We present a three-level hierarchical formalism for agent dialogues:
- At the lowest level, the topics which are the subjects of dialogues;
- The dialogues themselves – instantiations of persuasions, inquiries, etc., and combinations of these – which we represent by means of formal dialogue games;
- At the highest level, control dialogues, where agents decide which dialogues to enter, if any.

… no particular dialogue may commence without the consent of all those agents participating.

… every dialogue game has a legal locution which proposes to the participants that they interrupt the current dialogue and return to the Control Layer.

… A dialogue may terminate when all participants agree to terminate it (or, for example, one participant, or only when a majority wish to do so). This may occur even though the dialogue may not yet have ended, for instance, when a persuasion dialogue does not result in all the participants accepting the proposition at issue…

Dialogues about dialogues: Because our application domain involves consenting agents, the selection of the dialogue-type may itself be the subject of debate between the agents concerned.

Treat G and H as dialogues; the Dialogue Combinations are then as follows:
- Iteration: n-fold repetition of G.
- Sequencing: G immediately followed by H.
- Parallelization: G and H undertaken simultaneously.
- Embedding: undertaking H within G.
- Testing: assessing the truth-status of some proposition which has become the subject of contention in a dialogue, and which makes reference to the world external to that dialogue, e.g., interrogation of a database or conduct of a scientific experiment.
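A rough operational sketch of these combinations, treating a dialogue simply as a list of moves (this flat representation is invented here and is far cruder than the paper’s formal definitions):

# Dialogues as plain lists of moves; combination operators over them.

def iterate(g, n):
    """Iteration: n-fold repetition of G."""
    return [move for _ in range(n) for move in g]

def sequence(g, h):
    """Sequencing: G immediately followed by H."""
    return g + h

def parallelize(g, h):
    """Parallelization: G and H undertaken simultaneously (here, interleaved)."""
    interleaved = [move for pair in zip(g, h) for move in pair]
    longer = g if len(g) > len(h) else h
    return interleaved + longer[min(len(g), len(h)):]

def embed(g, h, at):
    """Embedding: undertaking H within G, starting at position 'at'."""
    return g[:at] + h + g[at:]

g = ["assert p", "contest p", "justify p"]
h = ["question q", "assert q"]
print(sequence(g, h))
print(embed(g, h, at=1))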

… For conflicts between semantic commitments from different dialogue occurrences, the dialogue participants may have different opinions on the appropriate form of resolution. For example… commitments from earlier (later) dialogues should take precedence over those from later (earlier) ones…

If we had generative mechanisms for each of the atomic dialogue-types, then we would have them for all dialogue types, by simple inspection of the Dialogue Combination Rules…

4, Example
We illustrate the framework with a dialogue occurrence between a potential buyer and potential seller of used motor cars…

… whether or not a particular type of sub-dialogue is appropriate at a specific place in a larger dialogue should be a matter for the participants to the dialogues to decide at the time. The formalism we have presented here enables such decisions to be made mutually and contextually.

Wednesday 17 January 2007

Negotiation and Why agents would want to share their knowledge

Thoughts following on from 11 January’s supervisor meeting.

Why would agents want to share knowledge? Naturally, sharing knowledge expands the knowledge-base of each agent. But sharing knowledge (through argumentation) also has other benefits, which include the eradication of false beliefs and resolution of conflicts.

As a particular example of the benefit of sharing knowledge and argumentation, consider the 2001 paper ‘Dialogues for Negotiation: Agent Varieties and Dialogue Sequences’ [1]. The work presents a “formal, logic-based approach to one-to-one agent negotiation, in the context of goal achievement in systems of agents with limited resource availability”. The solution proposed is “based on agent dialogues, as a way of requesting resources, proposing resource exchanges, and suggesting alternative resources”. The paper also mentions – in passing – two performatives of agent dialogues that have an argumentative dimension:
- challenge – used to ask for a reason (justification) for a past move.
- justify – used to justify a past move by means of a support.
These two performatives lay down the basis for extending and further generalising the proposed agent negotiation.

Argumentative approaches to agent negotiation (via dialogue) already exist [2, 3, 4, 5], some of which are “in a way more general” than the work presented in [1]. The real challenge, however, is to generalise the negotiation of [1] while retaining the logic-based properties (which lend themselves to theoretically provable results) and without making it less operational.

Dialogues are modelled by logic-based dialogue constraints, which are (possibly non-ground) if-then rules contained in the knowledge-base of agents of the form:
p(T) & C => p’(T + 1),
where p is the received performative (the trigger), p’ is the uttered performative (the next move) and C is a conjunction of literals in the language of the knowledge-base of the agent (the condition of the dialogue constraint). Intuitively, the dialogue constraints of an agent express policies between it and other agents. Currently, these policies remain unknown to other agents; thus, in making or responding to requests, the reasoning behind decisions remains largely hidden. Further, allowing argumentation (challenges and justifications) to follow requests and responses has the potential to increase the number of possible deals and improve the resulting agreements. This will be demonstrated after a brief mention of the other aspects of the knowledge-base (a small sketch of a dialogue constraint follows the list below):
- domain-specific as well as domain-independent beliefs that the agent can use to generate plans for its goal, such as the knowledge describing preconditions and effects of actions;
- information about the resources available to the agent;
- information about the dialogues in which the agent has taken part;
- information on the selected intention, consisting of the given goal, a plan for the given goal, as well as the set of resources already available to the agent and the set of missing resources, both required for the plan to be executable.
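Returning to the dialogue constraints, here is a minimal sketch of the rule form p(T) & C => p’(T + 1) (the encoding is invented for illustration; [1] defines these constraints in logic, not code, and the resource name anticipates the example below):

# A dialogue constraint: if performative 'trigger' is received at time T and
# condition C holds against the agent's knowledge-base, utter 'response' at T + 1.

def make_constraint(trigger, condition, response):
    def fire(performative, kb, t):
        if performative == trigger and condition(kb):
            return (response, t + 1)   # the next move and its time of utterance
        return None
    return fire

# Example policy: refuse a request for a resource the agent itself still needs.
refuse_if_needed = make_constraint(
    trigger="request",
    condition=lambda kb: "r3" in kb["needed"],
    response="refuse",
)

print(refuse_if_needed("request", {"needed": {"r3"}}, t=0))   # -> ('refuse', 1)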

The purpose of the negotiation is for the agent to obtain the missing resources, while retaining the available ones that are necessary for the plan in its current intention. To illustrate the advantages of argumentation (and knowledge-sharing) in conflict resolution, consider three agents (a1, a2, a3) and five resources (r1, r2, r3, r4, r5) as follows:

a1: resources = {r1, r2}, intention = { plan(p1), available(r1), missing(r3), goal(g1) }
a2: resources = {r3}, intention = { plan(p2), available(r3), missing(r1), goal(g2) }
a3: resources = {r4, r5}, intention = { plan(p3), available(r5), missing(), goal(g3) }

In this example, the agent a1 needs resources r1 and r3 to make its plan p1 executable. Currently it has r1 but not r3. Further, it has another resource r2 that it does not need. In order for a1 to obtain r3, which is held by a2, it needs to make a request for it. However, r3 is also needed by a2 according to its own current intention and thus a2 will refuse a1’s request. Similarly, a2 requires r1 according to its current intention but cannot obtain it since it is also required by a1, which currently holds it.

Since neither a1 nor a2 can proceed there needs to be some sort of conflict resolution. One way of achieving this would be for a1, who made the request, to change its plan and try to fulfil its goal with a new plan that needs a different set of (obtainable) resources. Alternatively, a1 could challenge a2 as to why it refused, and based on the justification try to convince a2 to give up the resource r3.

Suppose that a1 makes a request to a2 for resource r3, which is refused. Assuming that a1 cannot form an alternative plan, it challenges the refusal. a2 justifies the refusal by declaring that it needs r3 as part of its current plan p2 to achieve the goal g2. Suppose also that there is an alternative plan p2’ to p2 for the goal g2 that involves resources r2 and r4, and a1 knows this. Now a1 has r2 in its possession and can obtain r4 from a3, and can thus propose that a2 change its plan to p2’, accept resources r2 and r4, and hand over r3 in exchange, thereby resolving the conflict.
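For concreteness, the exchange just described might be traced as follows (a hypothetical message trace; the performative names loosely follow [1], and the exact argumentative moves are invented):

# Hypothetical trace of the conflict-resolution dialogue above.
trace = [
    (0, "a1", "a2", "request(r3)"),
    (1, "a2", "a1", "refuse(r3)"),                 # r3 is part of a2's plan p2
    (2, "a1", "a2", "challenge(refuse(r3))"),      # a1 has no alternative plan
    (3, "a2", "a1", "justify(refuse(r3), need(r3, p2, g2))"),
    (4, "a1", "a2", "propose(give({r2, r4}), take(r3), adopt(p2'))"),
    (5, "a1", "a3", "request(r4)"),                # a3 does not need r4
    (6, "a3", "a1", "accept(request(r4))"),
    (7, "a2", "a1", "accept(proposal)"),           # a2 adopts p2'; conflict resolved
]
for t, sender, receiver, message in trace:
    print(f"t={t}  {sender} -> {receiver} : {message}")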

No assumption is made on how plans for intentions are generated. It is assumed, however, that the intention is such that the plan it contains allows the goal to be achieved. As well as conflict resolution illustrated in the example above, argumentation could also be used as a means of convincing agents that their intentions are impossible to achieve. By doing so, agents can potentially convince each other to modify their plans/intentions and thus agree to more resource exchanges than would otherwise be possible.

As an illustration of argumentation used to eradicate false beliefs or infeasible plans, consider the three agents and five resources from the previous example. Assume that r1 is required for a1 to achieve its goal g1 regardless of the chosen plan (i.e. it will never give it up). A request from a1 to a2 for r3 will result in refusal since a2 also needs r3. a1 can follow this refusal with a challenge questioning the refusal. a2 will respond justifying its need for r3 (and r1) to carry out its plan p2. Since r1 is indefinitely unobtainable, a1 can then follow the justification by notifying a2 of this and suggesting that a2 change its plan. This will cause a2 to change its plan and may result in a2 handing r3 to a1 as required.

Potentially agents should be able to share and modify any aspect of their knowledge-base, including the dialogue constraints. In the case of dialogue constraints, sharing this would be useful to understand what response an agent would give under certain conditions or to understand the conditions that gave rise to a certain response. In the case of sharing dialogue history and knowledge of resource holders, this would be useful to avoid redundant communication. Sharing intentions has benefits as illustrated in the examples above.

Sunday 14 January 2007

Initial Thoughts and Plans

Thoughts following on from 18 December’s supervisor meeting.

The plan for the near future is to present an argumentative approach to sharing knowledge between collaborative agents. The envisioned approach will be based on sequences of dialogues. Each sequence of dialogues, between two or more agents, seeks to construct or attack a belief/standpoint. An agent may hold a belief until such time that it is put into question (i.e. challenged or attacked) and unsuccessfully defended. A challenge may be to simply express doubt with the intention of understanding the other agent’s standpoint, or to attack the other agent’s beliefs in promotion of an alternative standpoint. In the former case, an unsuccessful defence will result in retraction of the current standpoint. In the latter case, an unsuccessful defence will result in adoption of the promoted standpoint.

A dialogue takes place between two agents and its form will vary depending on whether a belief is being constructed or attacked. In the former case, a dialogue will consist of an enquiry followed by a response. In the latter case, a dialogue will consist of a challenge followed by a justification or acknowledgement of defeat.

The knowledge-base (beliefs) of agents will consist of facts, rules and assumptions; all shareable and defeasible. Intra- and inter-agent knowledge may be conflicting, and the process of sharing knowledge through argumentative discourse aims to resolve these conflicts. Though the joint knowledge of agents may be conflicting, a joint argument (built on one line of reasoning) may not be.

As an example of a dialogue used to construct a belief, consider two agents (a1, a2) as follows:
a1 believes 'p holds if q and r hold' and 'q holds'.
a2 assumes 'r holds'.
An enquiry from a1 to a2 in the form “Does p hold given my belief ‘p holds if q and r hold’?” will result in a response from a2 in the form “I do not know if p holds, but I am currently assuming r holds”.
As a result of this dialogue, a1 can choose to adopt a positive standpoint for p given its belief of q and a2’s assumption on r.

As an example of a dialogue used to attack a belief, consider two agents (a1, a3) as follows:
a1 believes 'p holds if q and r hold' and 'q holds', and assumes 'r holds' allowing it to hold a positive standpoint for p.
a3 believes 'r does not hold'.
A challenge from a3 in the form “What is your basis for believing p given that r does not hold?” will result in either a1 finding and presenting another basis for p or acknowledging that it has no sound basis for its standpoint on p.

As an example of a sequence of dialogues used to construct a belief, consider three agents (a1, a4, a5) as follows:
a1 believes 'p holds if q and r hold' and 'q holds'.
a4 believes 'p holds if q and not s hold'.
a5 believes 's does not hold'.
An enquiry from a1 to a4 in the form “Does p hold given my belief ‘p holds if q and r hold’?” will result in a response from a4 in the form “I do not know if p holds, but I do know that ‘p holds if q and not s hold'." This will spark a further enquiry from a1 to a5 in the form “Does p hold given my beliefs ‘p holds if q and r hold’ and ‘p holds if q and not s hold’?” This enquiry will result in a response from a5 in the form “I do not know if p holds, but I do know that s does not hold”. This sequence of dialogues allows a1 to construct a positive standpoint for p on the basis of its belief that 'q holds', a4’s belief that 'p holds if q and not s hold' and a5’s belief that 's does not hold'.
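A minimal sketch of this pooling of knowledge, with beliefs represented crudely as rules (premises, conclusion), facts and assumptions over propositional strings (the encoding is invented for illustration):

# Each agent holds rules (premises, conclusion), facts and assumptions.
a1 = {"rules": [({"q", "r"}, "p")], "facts": {"q"}, "assumptions": set()}
a4 = {"rules": [({"q", "not s"}, "p")], "facts": set(), "assumptions": set()}
a5 = {"rules": [], "facts": {"not s"}, "assumptions": set()}

def support(goal, agents):
    """Try to assemble one joint line of reasoning for 'goal'."""
    pooled = set().union(*(a["facts"] | a["assumptions"] for a in agents))
    for agent in agents:
        for premises, conclusion in agent["rules"]:
            if conclusion == goal and premises <= pooled:
                return premises        # the premises the standpoint rests on
    return None

print(support("p", [a1, a4, a5]))      # -> {'q', 'not s'}, via a4's rule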

The work will initially assume cooperativeness between agents, ignoring elements such as deceitfulness, trust and reliability. Such elements will be considered later to bring in notions of competitiveness. Other open areas include:
- The notions of attack and defence (undercutting, rebuttal etc), and issues surrounding the semantics of beliefs (admissibility etc).
- What is an assumption, a factual truth and a rule? On what basis are they formed and defeated?
- The structure of arguments.
- The communication language and communication protocols: How does an agent know who to contact for particular knowledge? Should dialogues be restricted to one-to-one?
- Agent varieties - tying in with issues of cooperativeness and competitiveness: Are all agents the same with respect to how they enquire, respond, challenge and justify?

Thursday 11 January 2007

3, Trust in Multi-Agent Systems

Notes taken from 'Trust in Multi-Agent Systems' (2004), by Sarvapali D. Ramchurn et al.

1, Introduction
Trust - a belief that the other party will do what it says it will (being honest and reliable) or reciprocate (being reciprocative for the common good of both), given an opportunity to defect to get higher payoffs.
Individual-Level Trust - whereby an agent has some beliefs about the honesty or reciprocative nature of its interaction partners.
System-Level Trust - whereby the actors in the system are forced to be trustworthy by the rules of encounter (i.e. protocols and mechanisms) that regulate the system.

2, Individual-Level Trust
Trust-Models at the Individual Level - classified as either learning (and evolution) based, reputation based, or socio-cognitive based.
Learning and Evolving Trust - an emergent property of direct interactions between self-interested agents. We assume that agents will interact many times rather than through one-shot interactions.
Reputation Models - the opinion or view of someone about something, mainly derived from an aggregation of opinion of members of the community about one of them.
Socio-Cognitive Models of Trust - forming beliefs according to the assessment of the environment and the opponent's characteristics which could also include an analysis of past interactions.

3, System-Level Trust
Open Multi-Agent Systems - agents interact via a number of mechanisms or protocols that dictate the rules of encounter, e.g., auctions, voting, contract-nets, bargaining, market mechanisms etc.
System-Level Trust - subdivided in terms of:
i, devising truth-eliciting interaction protocols.
ii, developing reputation mechanisms that foster trustworthy behaviour.
iii, developing security mechanisms that ensure new entrants can be trusted.

4, Discussion and Conclusion
A classification of approaches to trust in multi-agent systems:

Individual-Level (Socio-cognitive/Reputation/Evolutionary-and-Learning models) -> Reasoning -> TRUST <- Actions <- System-Level (Trustworthy-Interaction/Reputation/Distributed-Security mechanisms)

2, The Uses of Argument

"In 1958, Stephen Toulmin introduced a conceptual model of argumentation. He considered a pictorial representation for logical arguments, in which four parts are distinguished: claim, warrant (a non-deterministic reason which allows the claim), datum (the evidence needed for using the warrant), and backing (the grounds underlying the reason). Counterarguments are also arguments which may attack any of the four preceding elements. By chaining arguments a disputation can be visualised [applied later - 1991]. Today, Toulmin's work is essentially of historic interest." (taken from 'Logical Models of Argument' - Carlos Ivan Chesnevar - 2000)

Notes below taken from 'The Uses of Argument' (1958), by Stephen Toulmin

page 99 - D (data), So C (claim/conclusion), since W (warrant)
e.g. "Harry was born in Bermuda", So "Harry is a British subject", since "A man born in Bermuda will be a British subject".

page 101 - D, So, Q (qualifier), C, since W, unless R (rebuttal)
e.g. "Harry was born in Bermuda", So, "presumably", "Harry is a British subject", since "A man born in Bermuda will be a British subject", unless "Both his parents were aliens / he has become a naturalised American / ..."

page 104 - D, So, Q, C, since W, on account of B (backing), unless R
e.g. "Harry was born in Bermuda", So, "presumably", "Harry is a British subject", since "A man born in Bermuda will be a British subject", on account of "the following statutes and other legal provisions...", unless "Both his parents were aliens / he has become a naturalised American / ..."

However exhaustive the evidence provided by D and B together, the step from these to the conclusion C is not an analytic one.

Logical Gulf - the transition of logical type involved in passing from D and B on the one hand to C on the other. The epistemological question is what can be done about this gulf? Can we bridge it? Need we bridge it? Or must we learn to get along without bridging it?

Wednesday 10 January 2007

1.6, Evaluation: The Soundness of Argumentation

Notes taken from ‘Argumentation: Analysis, Evaluation, Presentation’, by Frans van Eemeren et al.

1, Evaluating Argumentative Discourse
Defective Argumentative Discourse: Due to contradictions in the argument as a whole, or individual arguments may be unacceptable or otherwise flawed.
Unacceptability of a Part of the Argumentation: Different consequences for the different types of argumentation (i.e. multiple, coordinative, subordinative).
Assessing the Soundness of Argumentation: All complex argumentation must be broken down into single arguments, each of which must be assessed. It is advisable, however, not to proceed to the assessment of the individual arguments before determining whether the argumentation as a whole is consistent.
Logical Inconsistency: When statements are made that, because they contradict each other, cannot possibly both be true.
Pragmatic Inconsistency: When argumentation contains two statements that, although not logically inconsistent, have consequences in the real world that are contradictory.
Soundness of a Single Argument: The argument must be judged according to the degree to which it justifies (or refutes) the proposition to which the standpoint refers. To be considered sound, it must meet three requirements:
i, Each of the statements that make up the argument must be acceptable;
ii, the reasoning underlying the argument must be valid;
iii, the “argument scheme” employed must be appropriate and correctly used.

2, The Acceptability of Argumentative Statements
There are statements whose acceptability can be established with no problem. Examples of these are factual statements whose truth can be verified. The acceptability of nonfactual statements can also sometimes be agreed on quickly, for instance, when they concern commonplace values or judgements (e.g. “Parents should take care of their children”). Of course, in many other instances it is very difficult to agree on the acceptability of a statement, particularly if it involves a complex matter or is strongly tied to particular values and norms (e.g. “Reading is (not) the best way to improve your language skills”). If such statements are not supported by further argumentation, the speaker’s argumentation as a whole may not be accepted as an adequate defence (or refutation) of the standpoint.

3, The Validity of the Reasoning
There is only one situation in which a single argument cannot be reconstructed as being based on valid reasoning, and that is if invalid reasoning is put forward explicitly. Reasoning that is incomplete can almost always be completed in a way that renders it logically valid. If a premise has been left unexpressed, the solution is simply to add to the argument the appropriate “if… then…” statement. However odd the resulting statement may be, the reasoning is valid.
Modus Ponens: (1) “If A, then B”, (2) “A”, therefore (3) “B”.
Modus Tollens: (1) “If A, then B”, (2) “Not B”, therefore (3) “Not A”.
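A toy string-based rendering of the two valid forms (purely illustrative):

def modus_ponens(rule, fact):
    a, b = rule                    # rule encodes "If A, then B"
    return b if fact == a else None

def modus_tollens(rule, fact):
    a, b = rule
    return "not " + a if fact == "not " + b else None

rule = ("A", "B")
print(modus_ponens(rule, "A"))         # -> B
print(modus_tollens(rule, "not B"))    # -> not A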

4, The Use of Argument Schemes
Argument Scheme: Links the arguments and the standpoint being defended in a specific way. May or may not be done correctly.
Types of Argumentation: Three different types characterised by three main categories of argument schemes: symptomatic, analogy-based and causal.
Critical Questions: Asked to determine whether a given argument meets the criteria relevant to that type of argumentation.

5, Argumentation Based on a Symptomatic Relation
General argument scheme: “Y is true of X”, because “Z is true of X”, and “Z is symptomatic of Y”.
Critical Questions: “Aren’t there also other non-Y’s that have the characteristic Z?” “Aren’t there also other Y’s that do not have the characteristic Z?”

6, Argumentation Based on a Relation of Analogy
General argument scheme: “Y is true of X”, because “Y is true of Z”, and “Z is comparable to X”.
Critical Questions: “Are there any significant differences between Z and X?”

7, Argumentation Based on a Causal Relation
General argument scheme: “Y is true of X”, because “Z is true of X”, and “Z leads to Y”.
Critical Questions: “Does Z always lead to Y?”
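The three schemes and their critical questions can be collected compactly; the encoding below is my own, with the schemes and questions quoted from the three sections above:

schemes = {
    "symptomatic": (
        "Y is true of X, because Z is true of X, and Z is symptomatic of Y",
        ["Aren't there also other non-Y's that have the characteristic Z?",
         "Aren't there also other Y's that do not have the characteristic Z?"],
    ),
    "analogy": (
        "Y is true of X, because Y is true of Z, and Z is comparable to X",
        ["Are there any significant differences between Z and X?"],
    ),
    "causal": (
        "Y is true of X, because Z is true of X, and Z leads to Y",
        ["Does Z always lead to Y?"],
    ),
}

for name, (scheme, questions) in schemes.items():
    print(name, "->", scheme)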

8, The Presentation of Different Types of Argumentation
… Sometimes it is easy to determine the type of argumentation because of the presence of certain expressions that indicate what the relation is between the argument and the standpoint…
Signs of a Symptomatic Relation: “It is characteristic of adolescents that they are rebellious”, “It is typical of…”, “It is natural for…”, “Adolescents are rebellious” etc.
Signs of a Relation of Analogy: “The movement towards democracy of the 1960s is like the French revolution”, “… is comparable to…”, “… is similar to…”, “… corresponds to…”, “… is related to…”, “… is reminiscent of…”
Signs of a Causal Relation: “Drinking a whole bottle of whiskey has the inevitable result that you get drunk”, “… leads to…”, “You always get drunk from…”, “… can’t help but make you…”

Tuesday 9 January 2007

1.5, Analysis: The Structure of Argumentation

Notes taken from ‘Argumentation: Analysis, Evaluation, Presentation’, by Frans van Eemeren et al.

1, Single Arguments
The defence of a standpoint often consists of more than a single argument. Several single arguments can be combined and arranged in a number of different ways to form the defence of a standpoint…
In the simplest case, a defence consists of one single argument, that is, an argument in fully explicit form consists of two and only two premises. Usually, one of these is unexpressed, so that the single argument appears to consist of only one premise...

2, Multiple, Coordinative, and Subordinative Argumentation
Multiple Argumentation: Consists of alternative defences of the same standpoint. These defences do not depend on each other to support the standpoint and are, in principle, of equal weight.
Coordinative Argumentation: One single attempt at defending the standpoint that consists of a combination of arguments that must be taken together to constitute a conclusive defence.
Subordinative Argumentation: Arguments are given for arguments. The defence of the initial standpoint is made layer after layer.

3, The Complexity of the Argumentation Structure
Argumentation can be of greater or lesser complexity, depending on the number of single arguments it consists of and the relationship between these arguments. The number of arguments that need to be advanced depends, among other things, on the nature of the difference of opinion.
Reasons for Multiple Argumentation: The protagonist anticipates that one or more of the attempts to defend the standpoint might be unsuccessful. Also, acceptability is a matter of degree; the additional arguments may raise the level of acceptance.

4, Representing the Argumentation Structure Schematically
Complex argumentation can always be broken down into a number of single arguments. And that is exactly what happens when the argumentation structure is analysed…
Single Argument: First assigned the number of the standpoint to which it refers (e.g., number 2), followed by a number of its own (e.g., 2.1). An unexpressed premise that has been made explicit is given in parenthesis and is assigned a number followed by an apostrophe (‘) (e.g., 2.1’).
Multiple Argument: Each argument is assigned the number of the standpoint followed by a number of its own: 2.1, 2.2, 2.3, and so on.
Coordinative Argumentation: The single arguments are all assigned the same number, followed by a letter (2.2a, 2.2b, 2.2c, etc).
Subordinative Argumentation: Indicated by two or more decimal points (e.g., 2.1.1 or 2.1.1a or 2.1.1’).
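As a worked illustration of these conventions (the structure itself is invented): a standpoint 2 defended by multiple argumentation, where one branch is subordinative with an explicit unexpressed premise and the other is coordinative:

structure = {
    "2":      "standpoint",
    "2.1":    "first (multiple) argument",
    "2.1.1":  "subordinative argument supporting 2.1",
    "2.1.1'": "unexpressed premise of 2.1.1, made explicit",
    "2.2a":   "first part of a coordinative argument",
    "2.2b":   "second part, only conclusive together with 2.2a",
}
for number, role in sorted(structure.items()):
    print(number, "-", role)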

5, The Presentation of Complex Argumentation
The protagonist almost never explicitly indicates how the argument is structured. There are, however, certain words and expressions that may serve as indicators of different types of structure.

6, A Maximally Argumentative Analysis
It is important to determine whether the argumentation is coordinative or multiple… In truly ambiguous cases, it is preferable to opt for an analysis as multiple argumentation… If each of several single arguments by itself is sufficient to defend the standpoint, then argumentation consisting of two or more such arguments must be unassailable. And if one of these arguments is undermined, it does not do irreparable damage to the defence.

7, Unexpressed Premises and Complex Argumentation
It is preferable when making unexpressed premises explicit to assume that for every incomplete single argument there is one unexpressed premise. When the context is well-defined, it is usually possible to further specify the unexpressed premise. It may even turn out that a whole chain of subordinative arguments was implied and can now be reconstructed.

Monday 8 January 2007

1.4, Analysis: Unexpressed Standpoints and Unexpressed Premises

Notes taken from ‘Argumentation: Analysis, Evaluation, Presentation’, by Frans van Eemeren et al.

1, Implicit Elements in Argumentative Discourse
Unexpressed: Elements (premises or standpoints) that are intentionally omitted but implicitly present in the argumentation.

2, Indirectness and the Rules for Communication
“Ordinary” Implicit Language Use: No attempt to convey something additional in a roundabout way. For example, a salesperson says “It’s 170” instead of “I inform you that the price of that suit is 170 dollars”.
Indirect Language: A special kind of implicit language use, where the speaker says what he means in a roundabout way. Examples of this are unexpressed premises and unexpressed standpoints. For example, someone may say “Would it be too much trouble to take this package to the post office?” while also meaning to request that the listener do the job.
Communication Principle: Followed when people want to communicate with each other. According to this principle, people who are communicating with each other generally try to make their contributions to the communication match, as much as possible, the purpose of their communication.
Rules for Communication: Observed to fulfil the Communication Principle. The most important rules, for whatever is said or written, are:
i, Clarity: It should be as easy to understand as possible.
ii, Sincerity: It must not be insincere.
iii, Efficiency: It should not be redundant or pointless.
iv, Relevancy: It must appropriately connect with what has gone before.

3, Correctness Conditions For Speech Acts
Speech Acts: Examples of this are announcements, promises, explanations or defending a standpoint. The communications rules must always be observed.
Observing the Rules: Meaning of this varies according to which speech act is performed. For a promise, the rule “Be sincere” requires that speakers must really intend to do what they promise. For a request, they must sincerely wish the listener to comply with the request.
Correctness Conditions: A precise description of what it means for each speech act to follow the Communication Principle, in the form of specific conditions that each kind of speech act must meet.
Preparatory Conditions: What the speaker must do in order to follow the efficiency rule. For argumentation, the speaker must believe that the listener
i, does not already fully accept the standpoint.
ii, will accept the statements used in the argumentation.
iii, will view the argumentation as an acceptable defence (or refutation) of the proposition to which the standpoint refers.
Responsibility Conditions: Describe what the speaker must believe in order to follow the sincerity rule. For argumentation, the speaker must believe that
i, the standpoint is acceptable.
ii, the statements used in the argumentation are acceptable.
iii, the argumentation is an acceptable defence (or refutation) of the proposition to which the standpoint refers.

4, Violations of the Communication Rules
“Rationalising” Tendency: When one of the communication rules has been violated without it being the case that the speaker has abandoned the Communication Principle, the listener tries to interpret the speaker’s words in such a way that the apparent violation acquires a plausible meaning. This is exactly what happens in indirectness.

5, Different Forms of Indirectness
Clarity Rule: Listeners can assume that it is possible for them to figure out the speaker’s meaning. A promise expressed vaguely or unclearly can be interpreted as an indirect expression of reluctance or even refusal: “I’ll fix that coffee grinder soon, God-willing.”
Sincerity Rule: Listeners can assume that the speaker means what he says. By saying something obviously insincere, the speaker can ironically (and indirectly) convey the opposite of what he or she actually says: “So you didn’t recognise him? He must have been flattered.”
Efficiency Rule: Listeners can assume that whatever a speaker says is not flawed in respect of redundancy or pointlessness. A pointless question – because it has no answer – can be used to indirectly express a complaint: “When will I ever find happiness?”
Relevance Rule: A response that obviously does not connect up with what has just been said can be used to convey that the speaker refuses to discuss the topic.

6, Making Unexpressed Standpoints Explicit
Even if speakers do not explicitly express their standpoint, as a rule, they expect the listener to be able to infer (by means of valid reasoning and logic) this standpoint from the arguments put forward. If there is more than one possibility, one should choose the standpoint that in the light of the context and background information is most in accordance with all the communication rules.

7, Making Unexpressed Premises Explicit
Unexpressed premises are made explicit with the aid of the Communication Principle, the communication rules and logic.
Modus Ponens: A logically valid form of reasoning. Given a rule “If p, then q” and given “p”, then “q” logically follows. In a constructive critical analysis of argumentation, the reasoning underlying the argumentation can sometimes be made valid by supplementing it with an “if… then…” statement.

8, Unexpressed Premises in a Well-defined Context
The context may be so well defined that it demands a specific phrasing of the unexpressed premise. If a non-specific interpretation entails attributing to the speaker a violation of the communication rules, then one should check whether the context also allows another, more specific interpretation that does not entail a violation.