Wednesday, 28 February 2007

Inquiry Dialogue System; Dialectical Proof Procedures

Thoughts following on from 27 February’s supervisor meeting.

‘A Generative Inquiry Dialogue System’ (2007 – Elizabeth Black, Anthony Hunter)

“… A commitment store is a set of beliefs that the agent has asserted so far in the course of the dialogue…” Can commitments/assertions be retracted if not relevant to the final argument construction? Can something like provisional commitments be included?

This isn’t really necessary. Once a commitment has been made public it does not make sense to retract it unless it is proven to be wrong, which is a different subject altogether.

“An agent’s commitment store grows monotonically over time. If an agent makes a move asserting an argument, every element of the support is added to the agent’s commitment store. This is the only time the commitment store is updated…” Isn’t there a risk here of irrelevance and non-minimality of the final main argument?

Yes, because a chosen path may prove to be wrong and assertions may have been made along that path. However, not much can be done about this.

“In order to select a single one of the legal assert or open moves, we assign a unique number to the move content and carry out a comparison of these numbers.” Do these numbers or comparisons have any meaning in terms of betterness and priority? In other words, does it mean anything for one move to be chosen or preferred over another?

No, this is not the point, as explained below.

“Let us assume that B is composed of a finite number Z of atoms. Let us also assume that there is a registration function ‘mu’ over these atoms. So, for a literal L, mu(L) returns a unique single digit number base Z… For a rule ‘L1 ^ … ^ Ln -> Ln+1’, mu(L1 ^ … ^ Ln -> Ln+1) is an n+1 digit number of the form mu(L1)…mu(Ln)mu(Ln+1). This gives a unique base Z number for each formula in B and allows us to choose a unique open move (i.e. the move whose mu value is least).” What are these functions all about? Where did they come from? Do these assigned numbers (and comparisons) have any meaning in terms of preference and priority, or do they have an element of randomness about them? On what basis are numbers assigned to atoms, given that a lower value (of an atom, rule or argument) is chosen by the selection function in favour of a higher value?

As far as I understand, they have this feature to guarantee determinism in the various selections (Pick) that they make. It is a bit long-winded and unnecessary; any other deterministic mechanism would work equally well. A simpler method, such as selecting the first item in a list, would do the job just as well. They need determinism as this is one of their requirements for the strategy (and I also like this for dialogue constraints - see the ATAL 2001 paper).
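To make this concrete, here is a minimal sketch (in Python, with an assumed three-atom belief base) of the paper’s ‘mu’ registration function alongside the simpler deterministic alternative suggested above; the encoding of rules as (body, head) pairs is my own.

    atoms = ['a', 'b', 'c']                  # assumed belief-base atoms; Z = 3
    Z = len(atoms)
    digit = {atom: index for index, atom in enumerate(atoms)}  # unique base-Z digit per atom

    def mu(formula):
        """Unique base-Z number: one digit for a literal, n+1 digits for a rule."""
        if isinstance(formula, str):         # a literal
            return digit[formula]
        body, head = formula                 # a rule L1 ^ ... ^ Ln -> Ln+1
        value = 0
        for literal in body + (head,):       # digits mu(L1)...mu(Ln)mu(Ln+1)
            value = value * Z + digit[literal]
        return value

    def pick(legal_moves):
        """Deterministic Pick: choose the move whose mu value is least."""
        return min(legal_moves, key=mu)

    def pick_first(legal_moves):
        """The simpler alternative: any fixed ordering yields determinism."""
        return legal_moves[0]

    print(pick(['b', (('a',), 'c'), 'a']))   # 'a', since mu('a') = 0 is least

Either function returns the same move every time it is given the same legal moves, which is all the determinism requirement asks for.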

“Note that a top-level argument inquiry dialogue will always have a defeasible fact as its topic, but each of its sub-dialogues will always have a defeasible rule as its topic.” Sub-dialogues may only open rules (that are present in the agent’s belief), is this constraint necessary?

Perhaps not, especially where privacy and minimising communication are concerned. Rather than opening with a rule like ‘a -> b’ where ‘a’ needs to be established, the agent would, in my opinion, do better to simply initiate a fresh inquiry for ‘a’. This way it would not declare its intent to establish the rule ‘a -> b’ (if that is a problem).

Another point to note is that, according to the paper, opening for a rule (like ‘a -> b’) does not add the rule to the commitment store of the agent. However, this is absurd, since an agent would not open for a rule if it did not believe it to be true. Thus, in my opinion, opening for a rule should add that rule to the commitment store of the agent.

“… Note that we assume some higher-level planning component that guides the agent when deciding whether to enter into a dialogue, who this dialogue should be with and on what topic, i.e. that makes the decision to make the move m1.” But isn’t this the job of the agent strategy?

No. The agent strategy takes over once these decisions have been made.

“We define the outcome of an argument inquiry dialogue as the set of all arguments that can be constructed from the union of the commitment stores and whose claims are in the question store.” This still leaves a lot of post-dialogue work to do, as not all commitments will be relevant, and some may even be contradictory.

Yes, but this is unavoidable.

‘Dialectical Proof Procedures For Assumption-Based, Admissible Argumentation’ (2005 – P. M. Dung, R. A. Kowalski, F. Toni)

This all seems to be going on inside the head of a single agent (with a single knowledge-base)!? What if the knowledge-bases are different?

Work in this area is lacking, and this is one place where my PhD will step in.

“(3ii) If ‘delta’ is not an assumption, then there exists some inference rule S/’delta’ (that is a member of R) and there exists exactly one child of N, which…” There may be many such rules, so which one does it pick and why? Why only one when that selection can potentially lead to failure?

At an implementation and search-space level this could be achieved by allowing backtracking, as in Prolog.
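As a minimal sketch (in Python, over an assumed toy rule base of my own) of what such backtracking might look like: each candidate rule for a goal is tried in turn, and a failed choice simply falls through to the next one.

    rules = {                        # assumed toy rule base: head -> list of bodies
        'p': [['q'], ['r']],         # two candidate rules for p: q/p and r/p
        'r': [[]],                   # r holds outright (empty body)
    }

    def prove(goal, assumptions=frozenset()):
        """Try each rule for `goal` in turn; backtrack on failure."""
        if goal in assumptions:                      # assumptions succeed immediately
            return True
        for body in rules.get(goal, []):             # candidate rules, in order
            if all(prove(sub, assumptions) for sub in body):
                return True                          # this choice worked
            # otherwise fall through: backtrack and try the next rule
        return False

    print(prove('p'))   # True: the rule q/p fails (no rule for q), but r/p succeeds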

“… either (4ia) ‘sigma’ is ignored and there exists exactly one child of N, which…or (4ib) ‘sigma’ is a culprit, and there exists exactly one child of N, which…” Ignored? Culprit? Why this sentence? On what basis? How can it know before trying? What if it’s wrong and it can’t counter-attack this sentence?

Again, as in the previous response, it does not know until it tries. The selection would be a result of some decision-making in the search space of possible moves.

“(5) There is no infinite sequence of consecutive nodes all of which are proponent nodes.” Why is this important and why not for the opponent as well? How can the procedure tell that a branch will be infinite?

Don’t know. This will be looked into for the next meeting.

“… Empty multi-sets O, however, can occur in failed parts of the search space, either because…” What does it mean for a part of the search to be failed? Does that mean the proponent’s overall counter-attack effort is unsuccessful, or that the proponent made an amendable wrong move somewhere along the way?

Note here that the failure is in the search space and not the tree itself, which allows for backtracking. Only if all possible routes in the search space fail will the proponent be unable to successfully counter-attack the opponent.

“… in addition to nodes that simply add concrete inference steps, a concrete dispute tree may contain extra branches that represent potential arguments by the opponent that fail…” How about potential arguments by the proponent that fail (for the same reasons)?

This occurs for the opponent because all seemingly viable inference rules are selected (step 4ii). This would not occur for the proponent because “some inference rule” is selected (i.e. only one), and this selection is linked to the previous comments about ignoring assumptions, backtracking in the search space etc.

“(Fig. 3) Concrete dispute tree for p for the assumption-based framework with p <- p.” Why is this (infinite) concrete dispute tree admissible? “… whether the selection function is patient or not, a concrete dispute tree must expand all of the proponent’s potential arguments into complete arguments…” Why isn’t there such a condition for the opponent nodes?

Don’t know. This will be looked into for the next meeting.

“Finding a culprit is dealt with in case 4(i) of Definition 6.1, by choosing whether to ignore or counter-attack the selected assumption in an opponent node.” Choosing? How? “Finding a way of attacking the culprit is dealt with by the search strategy. Our definition leaves the search strategy undetermined.” Search strategy? What’s that?

The former pair of questions has been answered above. As for the latter pair of questions, I would presume that a search strategy defines how to proceed through the search space.

I understand the difference between, and the purpose of, abstract and concrete trees (I think, anyway). But I don’t understand the purpose of dispute derivations, or the difference between them and dispute trees.

They do seem to be doing the same thing. However, in addition, the dispute derivations maintain a list of assumptions and culprits encountered so far. This allows for filtering of defences/culprits by defences/culprits, which has some interesting properties.

“(1i) If ‘sigma’ (that is a member of Pi) is selected then…” “(2) If S is selected in Oi and ‘sigma’ is selected in S then…” On what basis could/would a proposition be selected?

Any approach would do; for example, selecting the first proposition in a list.

“(1ii)… there exists some inference rule R/’sigma’…” There may be many such rules. Which one would be chosen? What if there is no such rule; is it then a non-admissible belief?

The former question has been answered above. As for the latter, this would mean that the algorithm fails, implying either that an initial argument cannot be built to support the conclusion or that a defensive argument cannot be built to counter-attack an opponent’s attack.

Miscellaneous discussion

Look into the Abductive LogIc Agent System (ALIAS). In particular, ‘Cooperation and Competition in ALIAS: A Logic Framework for Agents that Negotiate’ (2003).

You defined completeness of a procedure with respect to a semantics as follows: “If there is some ok outcome with respect to the semantics then the procedure finds it”. Does this (completeness) mean that the procedure can find all such viable solutions, or that the procedure will find at least one such solution but may not necessarily be able to find all possible solutions?

Correctness/soundness of a procedure with respect to a semantics:
1) Of success: if the procedure succeeds then the outcome is ok (according to the semantics).
2) Of failure: if the procedure fails then no outcome is ok (according to the semantics).

Completeness of a procedure with respect to a semantics is linked to soundness of failure: For every possible outcome, if the outcome is ok with respect to the semantics then the procedure finds it.

Also, I remember seeing mention of "strong" completeness somewhere previously. I can't remember where. Do you know of any such differentiation of "strong" and "weak" completeness?

It might have been related to selection functions: e.g., in strong completeness the selection function is chosen first, before applying the procedure; in weak completeness it is chosen afterwards, when applying the procedure. Have a look at John Lloyd’s book Foundations of Logic Programming for this.

Monday, 26 February 2007

8, Dialectic proof procedures for assumption-based, admissible argumentation

Notes taken from ‘Dialectical Proof Procedures For Assumption-Based, Admissible Argumentation’ (2005), by P. M. Dung, R. A. Kowalski, F. Toni

“We have presented three, successive refinements of dialectic proof procedures for the admissibility semantics of assumption-based frameworks. The proof procedures search for a winning strategy for a proponent, who argues to establish the admissibility of a belief, against an opponent, who attacks in every possible way the initial and defending arguments of the proponent…”

1, Introduction

Stable Semantics: Sanctions a belief if the belief is the conclusion of an argument whose set of supporting assumptions can be extended to a set of assumptions that both attacks every assumption not in the set, and does not attack itself.

Admissibility Semantics: Sanctions a belief if it is the conclusion of an argument whose set of supporting assumptions can be extended to a set of defending assumptions, which both counter-attacks every attack, and does not attack itself.

Because different agents can hold contrary beliefs the admissibility semantics is said to be credulous, rather than sceptical.

2, Admissibility for assumption-based argumentation frameworks

2.1, Deductive system: A pair (L, R) where…
2.2, Deduction of a conclusion ‘alpha’ based on a set of premises P: A sequence ‘beta1’, …, ‘betaM’ of sentences in L, where m>0 and ‘alpha’ = ‘betaM’, such that…

Flat frameworks: Assumptions do not occur as conclusions of inference rules.

2.3, Assumption-based framework: A tuple (L, R, A, ~) where…
2.4, Argument: A deduction whose premises are all assumptions.

2.5, The only way to attack an argument is to attack one of its assumptions:
- An argument ‘a’ attacks an argument ‘b’ iff ‘a’ attacks an assumption in the set of assumptions on which ‘b’ is based.
- An argument ‘a’ attacks an assumption ‘alpha’ iff the conclusion of ‘a’ is the contrary ‘~alpha’ of ‘alpha’.

2.6, A set of assumptions A attacks a set of assumptions B iff there exists an argument ‘a’ based upon a set of assumptions A’ (that is a subset of A) which attacks an assumption in B.

“Rebuttal” attacks (where an argument attacks another argument by contradicting its conclusion) are reduced to “undermining” attacks (that depend solely on sets of assumptions)…

2.7, The attack relationship is the basis of the admissibility semantics for argumentation:
- A set of assumptions A is admissible iff (1) A attacks every set of assumptions that attacks A, and (2) A does not attack itself.
- A belief ‘alpha’ is admissible iff there exists an argument for ‘alpha’ based on a set of assumptions A0, and A0 is a subset of an admissible set A.
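As a minimal sketch (in Python) of these two conditions over a toy flat framework of my own devising, with derivability computed by naive forward chaining:

    from itertools import chain, combinations

    assumptions = {'a', 'b'}
    contrary = {'a': 'p', 'b': 'q'}        # assumed contraries of the assumptions
    rules = [('q', {'a'})]                 # (head, body): from 'a' we can derive 'q'

    def derive(S):
        """Forward-chain to all conclusions supported by the assumption set S."""
        known = set(S)
        changed = True
        while changed:
            changed = False
            for head, body in rules:
                if body <= known and head not in known:
                    known.add(head)
                    changed = True
        return known

    def attacks(A, B):
        return any(contrary[b] in derive(A) for b in B)

    def subsets(S):
        S = list(S)
        return (set(c) for c in chain.from_iterable(
            combinations(S, r) for r in range(len(S) + 1)))

    def admissible(A):
        if attacks(A, A):                  # condition (2): A does not attack itself
            return False
        return all(attacks(A, B)           # condition (1): A attacks every attacker
                   for B in subsets(assumptions) if attacks(B, A))

    print(admissible({'a'}))   # True: nothing attacks {'a'}
    print(admissible({'b'}))   # False: {'a'} attacks 'b' and {'b'} cannot counter-attack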

3, Simplified frameworks for assumption-based argumentation

We use simplified assumption-based frameworks of the form (L, R, A, ~) where:
- All sentences in L are atoms p, q, … or negations of atoms ~p, ~q, … (i.e. L is a set of literals).
- The set of assumptions A is a subset of the set of all literals that do not occur as the conclusion of any inference rule in R.
- The contrary of any assumption p is ~p; the contrary of any assumption ~p is p.

4, Tight arguments

4.1, Given a selection function, a backward argument of a conclusion ‘alpha’ based on (or supported by) a set of assumptions A is a sequence of multi-sets S1, …, Sm, where S1 = {alpha}, Sm = A, and for every 1 <= i < m…

Because all steps in a backward argument are relevant to the conclusion by construction, we also call backward arguments tight arguments.
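As a minimal sketch (in Python, over an assumed toy framework with one rule per head), the sequence of multi-sets S1, …, Sm is obtained by repeatedly replacing a selected non-assumption sentence with the body of a rule for it:

    rules = {'q': ('p',), 'p': ('a',)}     # assumed: one rule per head, body as a tuple
    assumptions = {'a'}

    def backward(alpha):
        """Yield S1 = [alpha], ..., Sm (the supporting assumptions)."""
        S = [alpha]
        yield list(S)
        while any(s not in assumptions for s in S):
            selected = next(s for s in S if s not in assumptions)  # the selection function
            S.remove(selected)
            S.extend(rules[selected])      # fails (KeyError) if no rule exists for `selected`
            yield list(S)

    for step in backward('q'):
        print(step)                        # ['q'], then ['p'], then ['a']

Every sentence introduced along the way is there only because some earlier step demanded it, which is why such arguments are tight.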

A set of assumptions A is admissible iff:
1. for every tight argument ‘a’ that attacks A there exists a tight argument supported by A’ (that is a subset of A) that counter-attacks ‘a’, and
2. no A’ (that is a subset of A) supports a tight argument that attacks an assumption in A.

5, Abstract dispute trees

An abstract dispute tree can be viewed as an and-tree (“and” because it includes all attacks by the opponent against all proponent arguments in the tree).

The search space can be viewed as an and-or-tree (“or” because it includes all the alternative counter-attacks by the proponent against the opponent’s attacks).

5.1, An abstract dispute tree for an initial argument ‘a’ is a (possibly infinite) tree T such that…

Defence set of T: The set of all assumptions belonging to the proponent nodes in T.

5.2, An abstract dispute tree T is admissible iff no culprit in the argument of an opponent node belongs to the defence set of T.
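A minimal sketch (in Python, with an assumed node encoding of my own) of this admissibility check: collect the defence set from the proponent nodes, then verify that no opponent culprit belongs to it.

    def admissible_tree(nodes):
        """nodes: (player, assumptions, culprit) triples; culprit is None for 'P' nodes."""
        defence = set().union(*(asms for player, asms, _ in nodes if player == 'P'))
        return all(culprit not in defence
                   for player, _, culprit in nodes if player == 'O')

    nodes = [('P', {'x'}, None), ('O', {'y'}, 'y'), ('P', {'z'}, None)]
    print(admissible_tree(nodes))   # True: the culprit 'y' is not in the defence {'x', 'z'}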

If the opponent can attack the proponent using only the proponent’s own assumptions, then the proponent loses the dispute, because then the proponent attacks itself. However, to win the dispute, the proponent needs to identify, in every attack by the opponent, some culprit that does not belong to the proponent’s own defence, and to counter-attack it.

Soundness: The defence set of an admissible dispute tree is admissible.

(Strong form of) Completeness: For any initial argument ‘a’ whose support set is contained in an admissible set A of assumptions, there exists a dispute tree for ‘a’ whose defence set A’ is contained in A.

Any dispute tree that has no infinitely long branches is an admissible dispute tree.

5.3, A framework is stratified iff there exists no infinite sequence of arguments ‘a1’, …, ‘an’, …, where for every n >= 1, ‘an+1’ attacks ‘an’.

6, Concrete dispute trees

6.1, Given a selection function, a concrete dispute tree for a sentence ‘alpha’ is a (possibly infinite) tree T such that…

6.2, A concrete dispute tree T for a sentence ‘alpha’ is admissible iff no culprit of an opponent belongs to the defence set of T.

“… our proof procedures generalise negation as failure in logic programming…”

Patient selection functions: Always choose non-assumption sentences in preference to assumptions… they wait until a complete argument has been constructed before beginning to attack it.

For every admissible concrete dispute tree constructed by means of a patient selection function, there exists a corresponding admissible abstract dispute tree with the same defence set. Conversely as well…

6.3, A framework is acyclic if there is a well-ordering of all sentences in the language of the framework such that, whenever a sentence belongs to the premise of an inference rule, then the sentence is lower in the ordering than the conclusion of the inference rule.

7, Dispute derivations

Dispute derivations are to dispute trees what backward arguments are to proof trees…

… Different selections give rise to different derivations, but do not affect completeness, because they simply represent different ways of generating the same dispute tree.

The frontier of a dispute tree is a set of proponent and opponent nodes labelled by multi-sets of sentences, representing steps of potential arguments. A dispute derivation represents the current state of this frontier, together with the set of defence assumptions Ai and culprits Ci generated so far, as a quadruple: [Pi, Oi, Ai, Ci]. The sets Ai and Ci are used to filter arguments…

7.1, Given a selection function, a dispute derivation of a defence set A for a sentence ‘alpha’ is a finite sequence of quadruples [P0, O0, A0, C0], …, [Pi, Oi, Ai, Ci], …, [Pn, On, An, Cn] where…”

For every dispute derivation of a defence set A for a sentence ‘alpha’, the defence set is admissible, and there exists some A’ (that is a subset of A) that supports an argument for ‘alpha’.

8, Algorithmic issues

9, Related Work
Our assumption-based approach has the following features, which distinguish it from all the abstract approaches:
- tight arguments are generated by reasoning backwards from conclusions to assumptions,
- partially constructed, potential arguments can be attacked before they are completed,
- the same counter-argument can be used to attack different arguments sharing the same assumption.

10, Conclusion

Monday, 19 February 2007

Thoughts on 'A Generative Inquiry Dialogue System'

Thoughts following on from 13 February’s supervisor meeting.

A Generative Inquiry Dialogue System (2007, Elizabeth Black and Anthony Hunter)

Given agents x1 and x2 with knowledge bases (KB) as follows:
KB(x1) = {(a), (~d), (b -> c)}
KB(x2) = {(e), (a ^ f -> b), (~d ^ e -> b)}

According to the Black & Hunter system an inquiry initiated by x2 to establish ‘c’ would generate a sequence of moves as follows:

(time, move, current question store)
1, [x1, open, (c)], cQS = {c}
2, [x2, open, (b -> c)], cQS = {b}
3, [x1, open, (a ^ f -> b)], cQS = {a, f}
4, [x2, assert, ({a}, a)], cQS = {f}
5, [x1, close, (a ^ f -> b)], cQS = {f}
6, [x2, close, (a ^ f -> b)], cQS = {b}
7, [x1, open, (~d ^ e -> b)], cQS = {~d, e}
8, [x2, assert, ({~d}, ~d)], cQS = {e}
9, [x1, assert, ({e}, e)], cQS = {}
10, [x2, close, (~d ^ e -> b)], cQS = {}
11, [x1, close, (~d ^ e -> b)], cQS = {b}
12, [x2, close, (b -> c)], cQS = {b}
13, [x1, assert, ({~d, e, ~d ^ e -> b}, b)], cQS = {}
14, [x2, close, (b -> c)], cQS = {}
15, [x1, close, (b -> c)], cQS = {c}
16, [x2, assert, ({b -> c, ~d ^ e -> b, ~d, e}, c)], cQS = {}
17, [x1, close, (c)], cQS = {}
18, [x2, close, (c)], cQS = {}

At the end of the inquiry, an argument for ‘c’ can be constructed and the commitment stores (CS) are as follows:
CS(x1) = {a, ~d, e, b -> c, ~d ^ e -> b}
CS(x2) = {~d, e, ~d ^ e -> b}

Using our system (yet to be defined) we wish to generate dialogues similar to the above. However, the above inquiry raises a number of questions:
- The final commitment stores may contain assertions that are irrelevant to the final argument. As with the assertion ‘a’ at t=4 in the above example.
- Agents can initiate sub-dialogues to unnecessarily establish beliefs that they already know to be true, for example ‘e’ at t=7, which agent x2 already believes to be true. This is unwanted since agents would wish to minimise the amount of information given out (particularly in the medical domain) and minimise inter-agent communication (possibly due to expense). This is a result of only allowing sub-dialogues to be opened for rules.
- As a result of the previous point, the initiating agent will have to unnecessarily assert and thus publicly commit to beliefs that it held to be true from the beginning.
- Agents still have a lot of post-inquiry work to do in order to filter out assertions that they made during the course of the dialogue which turned out to be irrelevant to the final top-level argument (e.g. ‘a’ at t=4), so as to build a final top-level argument that is both minimal and non-contradictory (see the sketch after this list).
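As a minimal sketch (in Python, with the rules and facts encoded by hand from the example above) of that filtering step: backward-chaining from ‘c’ over the union of the commitment stores recovers one support and drops the irrelevant commitment ‘a’.

    facts = {'a', '~d', 'e'}                                         # facts in the union of the CSs
    rules = [(('b',), 'c'), (('a', 'f'), 'b'), (('~d', 'e'), 'b')]   # (body, head)

    def support(goal, seen=frozenset()):
        """Backward-chain for one support of `goal`; None if none exists."""
        if goal in seen:
            return None                                  # avoid cyclic derivations
        if goal in facts:
            return {goal}
        for body, head in rules:
            if head != goal:
                continue
            used = {' ^ '.join(body) + ' -> ' + head}
            subs = [support(sub, seen | {goal}) for sub in body]
            if all(s is not None for s in subs):
                return used.union(*subs)
        return None

    print(support('c'))   # {'b -> c', '~d ^ e -> b', '~d', 'e'}: 'a' has been filtered out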

Thus, in our (ideal) system, we wish to:
- Minimise information given out.
- Allow for irrelevant/incorrect assertions/commitments to be retracted.

Further, we would hope to relax the assumptions of Black & Hunter’s work in order to:
- Provide the capability to fully support interactions such as multi-disciplinary medical meetings; allowing more than two agents to take part in an argument inquiry dialogue.
- Consider the implications of allowing agents’ belief bases to change during a dialogue, since it is likely that an agent may be carrying out several tasks at once and may even be involved in several different dialogues at once, and as a result it may be regularly updating its beliefs… If an agent’s belief base kept growing during a dialogue, would it be possible to generate infinite dialogues? What should an agent do if it has cause to remove a belief from its belief base that it asserted earlier in the dialogue?...
- Further explore the benchmark which we compare our dialogue outcomes to…
- Further investigate the utility of argument inquiry dialogues when embedded in dialogues of different types…

Work to do
As stated previously, adapt the work of Black & Hunter to work for assumption-based argumentation (instead of defeasible logic programming) and dialogue constraints (instead of a strategy function), while using the same protocol and outcome functions and relaxing some of the above-mentioned assumptions.

Tuesday, 13 February 2007

7, DeLP: An Argumentative Approach

Notes taken from ‘Defeasible Logic Programming: An Argumentative Approach’ (2004), by Alejandro J. Garcia and Guillermo R. Simari

“… The defeasible argumentation basis of DeLP allows building applications that deal with incomplete and contradictory information in dynamic domains, where information may change. Thus, DeLP can be used for representing agent’s knowledge and for providing an inference engine…”

2, The Language

2.1 Fact: a literal, i.e. a ground atom, or a negated ground atom.
2.2 Strict Rule: an ordered pair, denoted “Head <- Body”.
2.3 Defeasible Rule: an ordered pair, denoted “Head -< Body”.

2.4 Defeasible Logic Program: a possibly infinite set of facts, strict rules and defeasible rules. In a program P, denoted as (H, A), we distinguish the subset H of facts and rules, and the subset A of defeasible rules.

2.5 Defeasible Derivation (monotonic)…
2.6 Strict Derivation: all the rules used in the defeasible derivation are strict rules.
2.7 A set of rules is contradictory iff there exists a defeasible derivation for a pair of complementary literals from the set.

3, Defeasible Argumentation

3.1 Argument Structure (non-monotonic): Denoted as [A, h]… or simply an argument A for h, is a minimal non-contradictory set of defeasible rules, obtained from a defeasible derivation for a given literal h… Note that strict rules are not part of an argument structure.
3.2 [B, q] is a sub-argument structure of [A, h] if B is a subset of A.

3.3 Two literals h and h1 disagree iff the set ‘H U {h, h1}’ is contradictory, where H is the set of facts and rules of the program.
3.4 We say that [A1, h1] counter-argues, rebuts, or attacks [A2, h2] at literal h iff there exists a sub-argument [A, h] of [A2, h2] such that h and h1 disagree.

3.5 (Generalised) Specificity: Criterion which allows discriminating between two conflicting arguments. Intuitively, this notion of specificity favours two aspects in an argument: it prefers an argument (1) with greater information content (and thus more precise) or (2) with less use of rules (more direct and thus more concise).

3.6 Equi-Specificity: Two arguments [A1, h1] and [A2, h2] are equi-specific iff A1 = A2, and the literal h2 has a strict derivation from ‘H U {h1}’, and the literal h1 has a strict derivation from ‘H U {h2}’.

3.7 Argument Comparison Using Rules’ Priorities: The argument [A1, h1] will be preferred (denoted “>”) over [A2, h2] iff:
1. there exists at least one rule ra (from A1) and one rule rb (from A2) such that ra > rb, and
2. there is no rb’ (from A2) and ra’ (from A1) such that rb’ > ra’.

4, Defeaters and Argumentation Lines

4.1 [A1, h1] is a proper defeater for [A2, h2] at literal h iff there exists a sub-argument [A, h] of [A2, h2] such that [A1, h1] counter-argues [A2, h2] at h, and [A1, h1] is strictly more specific than [A, h].
4.2 [A1, h1] is a blocking defeater for [A2, h2] at literal h iff there exists a sub-argument [A, h] of [A2, h2] such that [A1, h1] counter-argues [A2, h2] at h, and [A1, h1] is unrelated by the preference order to [A, h], i.e., neither argument structure is more specific than the other.
4.3 [A1, h1] is a defeater for [A2, h2] iff it is either a proper defeater or a blocking defeater.

4.4 Argumentation Line (for [A0, h0]): A sequence of argument structures from P, denoted [[A0, h0], [A1, h1], [A2, h2] …], where each element of the sequence [Ai, hi], i > 0, is a defeater of its predecessor [Ai-1, hi-1].
4.5 Supporting and Interfering argument structures: Let [[A0, h0], [A1, h1], [A2, h2] …] be an argumentation line, we define the set of supporting argument structures {[A0, h0], [A2, h2], [A4, h4] …} and the set of interfering argument structures {[A1, h1], [A3, h3], [A5, h5] …}.
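In code this split is just the even- and odd-indexed elements of the line; a minimal sketch (Python):

    line = ['A0', 'A1', 'A2', 'A3', 'A4']    # an argumentation line
    supporting = line[0::2]                  # ['A0', 'A2', 'A4']
    interfering = line[1::2]                 # ['A1', 'A3']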

4.6 A set of arguments {[Ai, hi]} (for i = 1 to n) is concordant iff the set ‘H U A1 U A2 U … U An’ is non-contradictory.

4.7 An argumentation line is acceptable iff:
1. It is a finite sequence.
2. The set of supporting arguments is concordant, and the set of interfering arguments is concordant.
3. No argument in the argumentation line is a sub-argument of an argument appearing earlier.
4. For all i, such that the argument [Ai, hi] is a blocking defeater for [Ai-1, hi-1], if [Ai+1, hi+1] exists, then [Ai+1, hi+1] is a proper defeater for [Ai, hi].

It is interesting to note that changes in the definition of acceptable argumentation line may produce a different behaviour of the formalism. Thus, the definition could be used as a way of tuning the system to obtain different results.

5, Warrant through Dialectical Analysis

In DeLP a literal h will be warranted if there exists a non-defeated argument structure [A, h]. In order to establish whether [A, h] is non-defeated, the set of defeaters for A will be considered. Since each defeater D for A is itself an argument structure, defeaters for D will in turn be considered, and so on. Therefore, more than one argumentation line could arise, leading to a tree structure.

5.1 Dialectical Tree… Every node (except the root) represents a defeater (proper or blocking) of its parent, and leaves correspond to non-defeated arguments. Each path from the root to a leaf corresponds to one different acceptable argumentation line.

Marking of a dialectical tree (a bottom-up process through which we are able to determine the marking of the root):
(1) All leaves in the tree are marked as “U”.
(2) An inner node will be marked as “U” iff every child of it is marked as “D”. Otherwise it will be marked as “D”, i.e. iff it has at least one child marked as “U”.
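A minimal sketch (in Python, with a dialectical tree represented simply as nested lists of children) of this bottom-up marking:

    def mark(node):
        """A node is a list of child nodes; a leaf is the empty list."""
        children = [mark(child) for child in node]
        if all(m == 'D' for m in children):   # vacuously true for leaves, so leaves are "U"
            return 'U'
        return 'D'                            # at least one child is marked "U"

    tree = [[], [[]]]    # root with an undefeated defeater and a defeated one
    print(mark(tree))    # 'D': the root has at least one child marked "U"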

5.2 Warranted Literals: Let [A, h] be an argument structure and T* its associated marked dialectical tree. The literal h is warranted iff the root of T* is marked as “U”. We will say that A is a warrant for h.

5.3 Answer to Queries: The answers of a DeLP interpreter can be defined in terms of a modal operator B. In terms of B, there are four possible answers for a query h:
- YES, if Bh (h is warranted)
- NO, if B~h (the complement of h is warranted)
- UNDECIDED, if neither Bh nor B~h (neither h nor ~h is warranted)
- UNKNOWN, if h is not in the language of the program.
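A minimal sketch (in Python, where `warranted` stands in as an assumed oracle for the modal operator B) of this four-way answer function; the UNKNOWN case would additionally require a check that h belongs to the program’s language.

    def answer(h, warranted):
        """Map warrant results for h and its complement to a DeLP answer."""
        comp = h[1:] if h.startswith('~') else '~' + h   # the complement of h
        if warranted(h):
            return 'YES'
        if warranted(comp):
            return 'NO'
        return 'UNDECIDED'

    print(answer('h', lambda lit: lit == '~h'))   # NO: the complement of h is warranted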

The Warrant Procedure with pruning

6, DeLP Extensions

DeLP with Default Negation… In DeLP “absence of sufficient evidence” means “there is no warrant”. Therefore, the default negation ‘not F’ will be assumed when the literal F is not warranted… Default negation will be allowed only preceding literals in the body of defeasible rules, e.g., ‘~cross_railway_tracks -< not ~train_is_coming’…

Extended Defeasible Rules: defeasible rules that use default negation.

… The reason for not allowing default negation in strict rules is twofold. On one hand, a strict rule ‘p <- not q’ is not completely strict, because the head ‘p’ will be derived assuming ‘not q’. On the other hand, the set of strict rules and facts could become a contradictory set in many cases…

Extended Defeasible Logic Program: A set of Facts, Strict Rules and Extended Defeasible Rules.

6.1 Extended Defeasible Derivation: Since the decision of assuming an extended literal ‘not L’ will be carried out by the dialectical process, the definition of defeasible derivation is modified accordingly in extended DeLP. The change reflects that when an extended literal is found in the body of a rule, the literal will be ignored…

6.2 Extended Argument Structure: The definition of argument structure is also extended in order to avoid the introduction of self-defeating arguments… The definition is as before but with an additional rule:
- if L is a literal in the defeasible derivation (from the union of the supporting argument, and set of facts and rules) of h, then there is no defeasible rule in the argument containing ‘not L’ in its body.

6.3 In extended DeLP, default negated literals (assumptions on which the derivation is based) will be another point of attack in an argument… An argument structure [A1, h1] is a defeater for [A2, h2] iff it is a proper or blocking defeater for [A2, h2], or an attack to an assumption of [A2, h2].

DeLP with presumptions (a defeasible rule with an empty body, e.g. ‘a -<’)…

7, Implementation and Application (visit http://cs.uns.edu.ar/~ajg/DeLP.html)

Thursday, 8 February 2007

Argument-Based Decision Making using Defeasible Logic Programming

Thoughts following on from Carlos Iván Chesñevar’s talk at Imperial entitled ‘Argument-Based Decision Making using Defeasible Logic (DeL) Programming’.

In what follows, when referring to DeL programming treat ‘<-’ (or ‘->’) as implication for a strict rule and ‘<’ (or ‘>’) as implication for a defeasible rule. Treat ‘/’ as inference, ‘~’ as negation (propositional or negation by failure) and ‘=’ as the contrary symbol (as used in the mapping below).

Another point of note: a deductive system is a pair (L, R) where L is the language (the set of allowed “expressions”: a, b, a^b, a^(a^b), etc.) and R is the set of rules of the form (a / b), including conjunction ( (a, b) / (a ^ b) ), modus ponens ( (a, (a -> b)) / b ), etc.

For defeasible rules (like ‘a < b, ~c’), is it correct that the conclusion of the rule is defeasible and not the rule itself? Is there any work on defeating the existence of the rules (strict or defeasible) themselves?

Yes and don’t know respectively.

As for the former question, suppose we have a DeL program consisting of the following rules:
(a < b, ~c), (b <-), (~c <-), (~a <-)
Then, although we can build an argument {b, ~c} for the conclusion (a), this is built from a defeasible rule and is thus “defeated” by the strict rule (~a <-).

As for the latter question, it doesn’t really make sense for defeasible rules since the rules are by nature defeasible. A defeasible rule can be used as long as it does not contradict the conclusions of any strict rules.

Is this work of DeLP orthogonal to or compatible with Assumption-Based (AB) argumentation?

There is a “mapping” between the two but then there are differences.

As for the mapping, take as an example a DeL program as follows:
(p > q), (> p), (r > ~q), (> r).
Using this program we can define arguments for (q) and (~q) as follows:
{(> p), (p > q)} and {(> r), (r > ~q)} respectively.

The above DeL program can be mapped to an AB deductive system consisting of the following inference rules:
( (p, alpha) / q ), ( beta / p ), ( (r, gamma) / ~q ), ( delta / r )
Where alpha, beta, gamma and delta are the possible assumptions.

Next, contrary relations have to be defined between the assumption and non-assumption predicates to complete the mapping between the DeL program and the AB system, as follows:
(alpha = ~q), (beta = ~p), (gamma = q), (delta = ~r).

Using these set of rules we can define arguments for (q) and (~q) as follows:
{alpha, beta} and {gamma, delta} respectively.
Note that the arguments “undercut” each other since the conclusion ~q attacks the support for the conclusion q (i.e. alpha) and the conclusion q attacks the support for the conclusion ~q (i.e. gamma).

As a further illustration of the mapping between DeL programs and AB deductive systems, if the defeasible rule (> p) in the DeL program was a strict rule (-> p) instead, all that would be required would be to replace the rule (beta / p) in the AB system with the rule ( / p).
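A minimal sketch (in Python, with my own encoding of rules as (body, head) pairs and generated assumption names) of this mapping: each defeasible rule gets a fresh assumption in its body, and that assumption’s contrary is the complement of the rule’s head.

    def complement(lit):
        return lit[1:] if lit.startswith('~') else '~' + lit

    delp_rules = [(('p',), 'q'), ((), 'p'), (('r',), '~q'), ((), 'r')]   # (body, head)
    names = ['alpha', 'beta', 'gamma', 'delta']

    ab_rules, contrary = [], {}
    for (body, head), name in zip(delp_rules, names):
        ab_rules.append((body + (name,), head))   # add a fresh assumption to the body
        contrary[name] = complement(head)         # its contrary is the head's complement

    print(ab_rules)   # [(('p', 'alpha'), 'q'), (('beta',), 'p'), (('r', 'gamma'), '~q'), (('delta',), 'r')]
    print(contrary)   # {'alpha': '~q', 'beta': '~p', 'gamma': 'q', 'delta': '~r'}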

One difference between the two is that DeL programming depends upon arguments being “minimal” whereas the work of AB argumentation does not. As an illustration of non-minimal arguments for each of the approaches, consider the example above:
- {(> p), (> r), (p > q)} is a non-minimal argument for (q) in the DeL program since (> r) serves no purpose in supporting the conclusion (q).
- {alpha, beta, gamma} is a non-minimal argument for (q) in the AB system since gamma serves no purpose in supporting the conclusion (q).
Requiring arguments to be minimal is not desired at an implementation level since checking whether an argument is minimal or not can be computationally expensive.
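A minimal sketch (in Python, with `derives` as an assumed oracle) of why the naive minimality check is expensive: it must try every proper subset of the support, which is exponential in the support’s size.

    from itertools import combinations

    def is_minimal(support, derives, conclusion):
        """`derives(S, c)` is an assumed oracle: can the set S support conclusion c?"""
        for r in range(len(support)):                 # every proper subset size
            for subset in combinations(support, r):
                if derives(set(subset), conclusion):
                    return False                      # a strictly smaller support suffices
        return True

    derives = lambda S, c: c in S or ('p' in S and c == 'q')   # toy oracle
    print(is_minimal({'p', 'x'}, derives, 'q'))   # False: {'p'} alone suffices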

Another difference is in the semantics; for DeL programming there is none.

He said, “the preferences are among argument structures and not rules”. What is meant by argument structure here?

Not sure.

How do the preferences work?

The preferences are drawn from “specificity”. As an example, consider a knowledge-base of rules and facts as follows:
(a -> b), (a > k), (b > ~k), (a), (b).
Now, using this we can construct an argument for (k) using the rules (a > k) and (a), but we can also construct an argument for its contrary (~k) using the rules (b > ~k) and (b). So which do we “prefer”? In this particular example we prefer (k) because of the rule (a -> b), which implies that (a), the basis of the conclusion (k), is more specific than (b), the basis of the conclusion (~k). This can be seen as ancestral in the sense that eldest is best.

In a “blocking situation”, where two arguments are equally preferred, how is the conflict resolved? Especially given that he said “the system is sceptical from the beginning”. What semantical notion is used in their framework?

Not sure and none respectively; the framework currently has no semantics.

He said that the “Defeasible Logic Programming can be considered a deviation from Logic Programming” and that the “defeasible rules behave like assumptions in some sense, and don’t in some sense”. How are assumptions represented in their framework? Are they?

Yes, in the sense that a defeasible rule can be “assumed” to hold in the absence of strict rules alerting to the contrary.

Further Work
Read, understand and adapt the recent paper (2007) by Elizabeth Black and Anthony Hunter entitled ‘A Generative Inquiry Dialogue System’ to work for:
- Assumption-based argumentation instead of Defeasible Logic Programming.
- Dialogue constraints instead of a strategy function.
However, the same protocol should be used, and the resulting work should be able to prove the same results.

Tuesday, 6 February 2007

1.9, 1.10, Presentation: Written and Oral Argumentation

Notes taken from ‘Argumentation: Analysis, Evaluation, Presentation’, by Frans van Eemeren et al.

Written Argumentation
Reasonableness: Such an argument contains nothing that would, by definition, form an obstacle to the resolution of a difference of opinion… The argumentation must convince readers by removing their doubts or by responding to their criticisms.

Comprehensible: The various parts of the argumentative text should be put together coherently. The use of language (and the standpoint and arguments themselves) should be as clear and understandable as possible… This does not necessarily mean the standpoint and arguments should be formulated explicitly – that would be unnatural and irritating... A well-presented argument will have a good balance of explicit and implicit elements.

… A series of statements and claims can be livened up considerably by throwing in an occasional rhetorical question, an exclamation, or some expression of feeling…

Analytical Overview: Can be a useful tool when rewriting the text or even when writing the first draft. It brings together concisely all the information necessary for evaluating an argumentative text, that is, what is the difference of opinion to be resolved, what is the structure of the argumentation, and so on. It can be used to check whether the argumentation is sound (i.e. whether it can stand up to criticism)…

Oral Argumentation
Good preparation will enable you to be flexible in responding to the other party:
- Be well prepared…
- Anticipate what position the other party will probably take and what their background in the subject matter is…

… Sometimes, rather than waiting for your opponent, you can just as well present the objection yourself and counter it: “Of course I am aware that… but…”

Use of language: To prevent misunderstandings, both parties must express their intentions as clearly as possible and interpret the opponent’s statements as accurately as possible…

Precization: Considering various possible interpretations of a statement and then choosing one of them… To ensure that they are both talking about the same thing, the participants may assign definitions to the main terms relevant to the discussion…

… To ensure the discussion proceeds in an orderly manner, the participants need to observe a number of important rules, including the following:
1. Each point raised in the discussion must be relevant to the matter at hand at that moment…
2. It is best to avoid making too many points at once…
3. The function of each contribution must be clear…
4. Participants should not draw out the discussion by unnecessary repetition or by bringing up points that have already been dealt with.
5. The discussion must be brought to a clear conclusion…

… When defending one’s own standpoint it is advisable to give the strongest arguments either right at the beginning or at the end. What comes first will influence the reception of the rest, and what comes last will be remembered the best…

... The conclusion of a speech should plant the most important points firmly in the minds of the audience. No new points should be brought up at this time, nor should the complete argument be repeated. It is important that the conclusion be clear and attractive…

… Some tips for a good presentation are:
- Announce no more than what you are going to do…
- Avoid giving the impression that you are not well prepared or are indifferent to the subject…
- Keep the sentences short…
- Use the passive form sparingly…
- Illustrate abstract ideas or generalisations with concrete examples…
- Instead of ending your speech suddenly, make sure you have a clearly identifiable conclusion…

Monday, 5 February 2007

1.8, Evaluation: Fallacies (2)

Notes taken from ‘Argumentation: Analysis, Evaluation, Presentation’, by Frans van Eemeren et al.

1, Violations of the Starting Point Rule
Rule 6: No party may falsely present a premise as an accepted starting point, or deny a premise representing an accepted starting point.
… It makes no sense to have a discussion with someone who will not commit himself to any starting points (some minimum of facts, beliefs, norms, and value hierarchies)… Explicit agreements about common starting points are rare. Parties normally operate on the assumption that they share certain starting points…
… Sometimes a proposition is temporarily accepted as true in order to test its acceptability or even to demonstrate that it is unacceptable because it has untenable consequences…
The antagonist violates Rule 6 if he questions either a proposition that was agreed on as a common starting point or one that the protagonist, based on verifiable background information, may rightly assume the antagonist to be committed to…
The protagonist violates Rule 6 if he acts as though a certain proposition was accepted as a starting point when that is not the case…
Fallacy of asking many questions: For example, asking “Who have you quarrelled with today?” instead of properly splitting the question in two: “Have you quarrelled with anyone today?” and “Who have you quarrelled with?”
Fallacy of circular reasoning (or begging the question or petitio principii): In defending their standpoints the protagonist uses an argument that amounts to the same thing as the standpoint…

2, Violations of the Argument Scheme Rule
Rule 7: A standpoint may not be regarded as conclusively defended if the defence does not take place by means of an appropriate argument scheme that is correctly applied.
Some argument schemes are rarely acknowledged to be sound…
Populist fallacy (argumentum ad populum): A variation of argumentation based on a symptomatic relation… It is claimed the standpoint should be accepted because so many people agree with it…
Confusing facts with value judgments (argumentum ad consequentiam): Inappropriately appealing to a causal relation… In support of a standpoint with a factual proposition, an argument is advanced that is normative because it points out undesirable effects of the standpoint: “It is (not) true, because I (don’t) want it to be true.”
… If an argument scheme is correctly applied, then all critical questions corresponding to this scheme can be satisfactorily answered…
Fallacy of abuse of authority (argumentum ad verecundiam): A proposition is presented as acceptable because some person or written source that is inappropriately presented as an authority says that it is so…
Fallacy of hasty generalisation (secundum quid): Generalising on the evidence of too few observations…
Fallacy of false analogy: The two things compared must really be comparable and there must be no special circumstances that invalidate the comparison.
Fallacy of post hoc ergo propter hoc (“After this, therefore, because of this”): Sometimes a cause-and-effect relation is based on no more than the fact that the one thing preceded the other.
Fallacy of the slippery slope: The mistake here is to wrongly suggest that adopting a certain course of action will inevitably be going from bad to worse, when in fact there is no evidence that such an effect will occur…

3, Violations of the Validity Rule
Rule 8: The reasoning in the argumentation must be logically valid or must be capable of being made valid by making explicit one or more unexpressed premises.
Affirming the consequent and denying the antecedent: Invalid counterparts of the modus ponens and modus tollens types of reasoning. The mistake made in both of these forms of invalid reasoning is that a sufficient condition is treated as a necessary condition.
Fallacy of division: Assuming every property of the whole also applies to each of the component parts.
Fallacy of composition: Treating the whole as a simple sum of the separate parts. If a stew is composed of ingredients each of which by itself is delicious, this is no guarantee that the stew will also be delicious.

4, Violations of the Closure Rule
Rule 9: A failed defence of a standpoint must result in the protagonist retracting the standpoint, and a successful defence of a standpoint must result in the antagonist retracting his or her doubts.
Fallacy of refusing to retract a standpoint that has not been successfully defended: A protagonist who has not managed to successfully defend the standpoint must be prepared to give up this standpoint.
Fallacy of refusing to retract criticism of a standpoint that has been successfully defended: If the protagonist has succeeded, then the antagonist must be prepared to retract the criticism of the standpoint.
Fallacy of concluding that a standpoint is true because it has been defended successfully: When inflated consequences are attached to the successful attack or defence. Successful protagonists are entitled to expect the other party to retract their doubts about the standpoint, but no more than that… If protagonists conclude that they have now proved that their standpoint is true, then they are going too far. The only thing they have shown is that their standpoint, based on the agreed-on starting points, can be successfully defended…
Fallacy of concluding that a standpoint is true because the opposite has not been successfully defended (argumentum ad ignorantiam): The failure of a defence does not warrant the conclusion that the standpoint has been shown to be false or that the opposite standpoint is true… This ignores the possibility of a “middle course”…

5, Violations of the Usage Rule
Rule 10: Parties must not use any formulations that are insufficiently clear or confusingly ambiguous, and they must interpret the formulations of the other party as carefully and accurately as possible.
Fallacy of unclarity or fallacy of ambiguity: Any time a party makes use of unclear or ambiguous language to improve his or her own position in the discussion. These fallacies occur not only by themselves, but also – even often – in combination with violations of other discussion rules…
Structural unclarity at the textual level: Unclarity related to the structure of larger pieces of text, resulting from “illogical” order, lack of coherence, obscure structure, and so on…
Unclarity at sentence level: Four main types can be distinguished, and this is demonstrated by the statement “Charles is a kleptomaniac”:
(1) Implicitness – “Are you warning me or just informing me?” The listener is not sure what the communicative function of the speech act is because the context and situation allow for more than one interpretation.
(2) Indefiniteness – “Charles? Charles who?” Seeks clarification of the propositional content. The listener cannot determine who the speaker is referring to; the reference is unclear.
(3) Unfamiliarity – “A kleptomaniac? What’s that?” Also indicates unclarity in the propositional content, but this time it is the predication that is problematic…
(4) Vagueness – “What do you mean, he’s a kleptomaniac? Do you mean once upon a time he stole something, or do you mean he makes a habit of stealing things?” The listener attempts to obtain a clearer idea of what the speaker means by “kleptomaniac”, thereby reducing the vagueness of this term...
Ambiguity has to do with the fact that words and phrases (and questions) can have more than one meaning…

1.7, Evaluation: Fallacies (1)

Notes taken from ‘Argumentation: Analysis, Evaluation, Presentation’, by Frans van Eemeren et al.

Fallacies: Violations of the discussion rules. There are 10 rules that apply specifically to the argumentative discussions. The first 5 rules pertain to how parties should put forward their standpoints and arguments in order to work constructively toward a resolution of the difference of opinion… The other 5 rules pertain to the argumentation and the conclusion of the discussion…

1, Violations of the Freedom Rule
Rule 1: Parties must not prevent each other from putting forward standpoints or casting doubt on standpoints.
A difference of opinion can be satisfactorily resolved only if it is first brought to light… Restricting the other party’s freedom to act is an attempt to dismiss him as a serious party to the discussion…
Fallacy of the stick (argumentum ad baculum): Any threat that aims to restrict the other party from freely putting forward his standpoint or criticism.
Appeal to pity (argumentum ad misericordiam): To play on the other party’s emotions: “How can you have given me a failing mark for my thesis? I’ve worked on it night and day.”
Personal attack (argumentum ad hominem): Being directed not at the intrinsic merits of someone’s standpoint or doubt, but at the person themselves…
Direct Personal Attack (abusive variant): What is being kicked is the person rather than the ball. The impression is given that someone stupid or evil could not possibly have a correct standpoint or a reasonable doubt…
Indirect Personal Attack (circumstantial variant): Suspicion is cast on the other party’s motives, for example by suggesting that the party has a personal interest in the matter and is therefore biased…
You also variant (tu quoque): An attempt is made to undermine the other party’s credibility by pointing out a contradiction in that party’s words or deeds… However, being inconsistent does not automatically mean that their standpoint is wrong (unless someone puts forward contradictory standpoints or arguments in the course of the discussion)…

2, Violations of the Burden-of-Proof Rule
Rule 2: A party who puts forward a standpoint is obliged to defend it if asked to do so.
Protagonists can be released from the obligation to defend their standpoint if they have previously defended it successfully against the same antagonist and if nothing has changed… or if their opponents refuse to commit themselves to anything or are not prepared to follow the rules… If someone tries to get out of the obligation to defend a standpoint, the discussion will stagnate in the opening round, in which it is determined who is protagonist and who is antagonist.
Shifting the burden of proof: “You prove first that it isn’t so.” In a nonmixed difference of opinion, only one party puts forward a standpoint, so there is only one party who has anything to defend… In a mixed difference of opinion both parties have an obligation to defend their standpoint. The only decision to be made is what order they should present their defences… Perhaps the status quo should be given the status of presumption… or the standpoint that is easiest to defend should be defended first (the principle of fairness)… But a mixed difference of opinion can never be completely resolved in an argumentative discussion unless both parties meet the obligation to defend their standpoints.
Evading the burden of proof: Presenting the standpoint as something that needs no proof at all (“It is obvious that…”, “Nobody in their right mind would deny that…”) or giving a personal guarantee for the correctness of the standpoint (“I can assure you that…”) If this ploy works, antagonists may feel overwhelmed and fail to voice their doubts.
Hermetic formulations of standpoints: Formulating the standpoint in a way that amounts to making it immune to criticism because it cannot be tested or evaluated. “Women are by nature obsessive”, “Men are basically hunters”, “The Frenchman is essentially intolerant”. These standpoints refer to “men”, “women”, “the Frenchman”, avoiding quantifiers such as “all”, “some”, or “most”. How many examples or counterexamples are needed? Often, intangible (essentialistic) qualifications, such as “essentially”, “real”, “by nature”, are used as well… Any counterexample will be met by something like “that's not a ‘real’ woman acting according to her ‘true nature’.” All attempts at refutation thus bounce off this armour of immunity.

3, Violations of the Standpoint Rule
Rule 3: A party’s attack on a standpoint must relate to the standpoint that has been advanced by the other party.
When the standpoint attacked is not the standpoint that was originally put forward by the protagonist, even if the disagreement seems to be resolved, it will be, at most, a spurious resolution. What the party seems to have successfully defended is not the same as what the other party has attacked.
The fallacy of the straw man: Parties misrepresent the opponent’s standpoint or attribute a fictitious standpoint to him or her. In both cases, they plan their attack by attributing to the opponent a standpoint that can be attacked more easily.
Attributing a fictitious standpoint to the other party:
- Emphatically putting forward the opposite standpoint. If someone says firmly, “I personally believe the defence of our democracy is of great importance”, she thereby suggests that her opponent thinks otherwise…
- Referring to a group to which the opponent belongs and linking that group with the fictitious standpoint: “She says that she thinks research is useful, but as a business person she naturally thinks of it as a waste of money”…
- Using expressions such as “Nearly everyone thinks that…” and “Educators are of the opinion that…” It is not stated who actually holds the standpoint being attacked and there is no evidence that there really are people who adhere to the standpoint. Not only is the standpoint fictitious, but the opponent too.
Misrepresenting the opponent’s standpoint: Presenting it in a way that makes it more difficult to defend, or even untenable or ridiculous. This is often achieved by taking the standpoint out of context, by oversimplifying it, or by exaggerating it…
If the original formulation of the disputed standpoint can be consulted, it is possible to verify whether it has been represented accurately… Sometimes, the representation is so improbable that it is immediately suspect… In other cases, it helps to watch out for certain signals in the way the standpoint is represented (“Clearly the author is of the opinion that…”, “The author obviously assumes that…”)

4, Violations of the Relevance Rule
Rule 4: A party may defend his or her standpoint only by advancing argumentation related to that standpoint.
Accepting a standpoint on the basis of an irrelevant argument means the difference of opinion has not really been resolved.
Irrelevant argumentation: When the argumentation has no relation whatsoever to the standpoint that was advanced in the confrontation stage… The shift is intended to make the standpoint easier to defend…
Non-argumentation: When a standpoint is defended with means other than argumentation (for example, playing on the emotions, sentiments or biases of the intended audience), while at the same time the protagonist acts as though he or she were providing argumentation… Not usually for the purpose of convincing the other party, but of winning over a third party…
Pathetic fallacy (from the word “pathos”): Playing on the emotions of the audience…
Ethos: Speakers attempt to increase the audience’s faith in their expertise, credibility, or integrity, so that the audience will simply take their word for the standpoint’s acceptability…
Ethical fallacy or abuse of authority (argumentum ad verecundiam): When a person who claims to have expertise does not actually possess it or when the expertise is not relevant to the matter at hand…

5, Violations of the Unexpressed Premise Rule
Rule 5: A party may not falsely present something as a premise that has been left unexpressed by the other party or deny a premise that he or she has left implicit.
Magnifying what has been left unexpressed: Putting words in each other’s mouths. Exaggerating the unexpressed premise and thus making the standpoint easier to attack.
Denying an unexpressed premise: Protagonists refusing to accept commitment to an unexpressed premise implied by their own defence.

Thursday, 1 February 2007

6.4, Elements of ABN Agents

Notes taken from Argumentation-Based Negotiation (ABN) (2003), by Iyad Rahwan et al.

… how agents are specified, and how they reason about the interaction.

Conceptual elements of a classical negotiating agent (at an abstract level):
- Locution interpretation component, which parses incoming messages.
- Proposal database, wherein proposals may be stored for future reference.
- Proposal evaluation and generation component, which ultimately makes a decision about whether to accept, reject or generate a counterproposal, or even terminate the negotiation.
- Locution generation component, which sends the response to the relevant party or parties.
- Knowledge base of its mental attitudes (such as beliefs, desires, preferences and so on), as well as models of the environment and the negotiation counterpart(s).
In contrast, ABN agents can explicitly exchange more sophisticated meta-level information, which gives rise to the need for additional components…
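
To fix these components in my mind, here is a minimal Python sketch of such a classical agent; the class and method names are my own illustration, not from the paper:

    class NegotiatingAgent:
        def __init__(self):
            self.proposal_db = []      # proposal database: stores proposals for future reference
            self.knowledge_base = {}   # mental attitudes plus models of the environment and counterparts

        def interpret(self, message):
            # locution interpretation component: parse the incoming message
            performative, proposal = message
            self.proposal_db.append(proposal)
            return performative, proposal

        def acceptable(self, proposal):
            # placeholder: a domain-specific check against the knowledge base
            return False

        def generate_counter(self, proposal):
            # placeholder: domain-specific counterproposal generation
            return None

        def respond(self, message):
            # proposal evaluation/generation plus locution generation:
            # accept, counterpropose, or terminate the negotiation
            performative, proposal = self.interpret(message)
            if performative == "propose" and self.acceptable(proposal):
                return ("accept", proposal)
            counter = self.generate_counter(proposal)
            return ("propose", counter) if counter else ("withdraw", None)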

1, Argument and Proposal Evaluation
We find it useful to distinguish between two types of considerations in argument evaluation:
- Objective – for example, by investigating the correctness of its inference steps, or by examining the validity of its underlying assumptions…
- Subjective – an agent may choose to consider its own preferences and motivations in making that judgement, or those of the intended audience…

Theoretical reasoning: Two agents reasoning about what is true in the world. Here it makes sense to adopt an objective convention that is not influenced by their individual biases and motivations…

Practical reasoning: Two participants engaged in a dialogue for deciding what course of action to take, or what division of scarce resources to agree upon, or what goals to adopt. Here it would make more sense for them to consider their subjective, internal motivations and perceptions, as well as the objective truth about their environment.

Even objective facts may be perceived differently by different participants, and such differences in perception may play a crucial role in whether or not participants are able to reach agreement…

In summary, agents participating in negotiation are not concerned with establishing the truth per se, but rather with the satisfaction of their needs…

One approach to proposal and argument evaluation is to assume agents are benevolent, using the following simple normative rule: “If I do not need a resource, I should give it away when asked.”

There are two types of conflict that would cause an agent to reject a request:
- It has a conflicting intention. In argumentation terms, it refuses the proposal if it can build an argument that rebuts it.
- It rejects one of the elements of the argument supporting the intention that denotes the request. In argumentation terms, it refuses the proposal because it can build an argument that undercuts it.
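
The rebut/undercut distinction is easy to state operationally. A minimal sketch, assuming an argument is simply a (support, claim) pair of string literals with ‘~’ for negation (my own encoding, not the paper’s):

    def negate(literal):
        return literal[1:] if literal.startswith("~") else "~" + literal

    def rebuts(attacker, target):
        # rebuttal: the attacker's claim contradicts the target's claim
        return attacker[1] == negate(target[1])

    def undercuts(attacker, target):
        # undercut: the attacker's claim contradicts an element of the target's support
        return any(attacker[1] == negate(s) for s in target[0])

    # e.g. rejecting a request by attacking its supporting assumption:
    request = ({"b_needs_printer"}, "give_printer_to_b")
    counter = ({"b_owns_printer"}, "~b_needs_printer")
    assert undercuts(counter, request) and not rebuts(counter, request)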

In order for argumentation to work, agents must be able to compare arguments. That is needed, for example, in order to be able to reject “weak” arguments…

An alternative trend in proposal and argument evaluation is to explicitly take into account the utility of the agent. The basic idea is that the agent would calculate and compare the expected utility in the cases where it accepts and rejects a particular proposal… This can be taken further by factoring in the trust the agent has in its counterpart when calculating the expected values… Another approach is to introduce authority…
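
As I read it, the utility-based evaluation amounts to something like the following sketch, where the trust discount is my own assumed rule for illustration:

    def expected_utility(outcomes):
        # outcomes: a list of (probability, utility) pairs
        return sum(p * u for p, u in outcomes)

    def evaluate_proposal(accept_outcomes, reject_outcomes, trust=1.0):
        # trust in [0, 1] discounts the payoff of acceptance, since the
        # counterpart may not honour the deal (an assumed discounting rule)
        eu_accept = trust * expected_utility(accept_outcomes)
        eu_reject = expected_utility(reject_outcomes)
        return "accept" if eu_accept > eu_reject else "reject"

    # accepting pays 10 if honoured; rejecting keeps a fallback worth 6
    print(evaluate_proposal([(1.0, 10)], [(1.0, 6)], trust=0.5))  # -> reject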

… The nature of argument evaluation depends largely on the object of negotiation and the way agents represent and update their internal mental states…

Challenges:
- Combining the objective (belief-based) and subjective (value-based) approaches to argument evaluation. For example, how can we combine the objective evaluation of the logical form of an argument with a subjective evaluation of its consequences based on utility, trust, authority, etc.?
- Providing unified argumentation frameworks that facilitate negotiation dialogues involving notions of goals, beliefs, plans, etc…
- … Understanding the space of possible influences ABN agents may (or must be able to) exert in the course of dialogue.

2, Argument and Proposal Generation
… This problem is concerned with generating candidate arguments to present to a dialogue counterpart…

In existing ABN frameworks, proposal generation is usually made as a result of some utility evaluation or planning process… Proposals may be accompanied by arguments, possibly generated using explicit (if-then) rules… or by providing “preconditions” for each argument to become a candidate argument… or, in a planning approach, generated in the process of proposal generation itself (stating the truth about its needs, plans, underlying assumptions, and so on, which ultimately caused the need to arise)…

Rahwan et al. [2] provide a characterisation of the types of arguments an agent can make in relation to the goal and belief structures of its counterpart…

Challenges:
- Providing a unified way of generating arguments by considering both objective and subjective criteria.
- A complete characterisation of the space of possible arguments (which in some frameworks could be infinite)…
- Understanding the influence of different factors, such as the interaction protocol, authority, expected utility, honesty etc. on argument generation. Specifically, how can authority be used in constructing an argument? Should an agent believe in an argument in order to present it? Can agents bluff? Etc.

3, Argument Selection
Given a number of candidate arguments an agent may utter to its counterpart, which one is the “best” argument from the point of view of the speaker?... Note that an agent need not generate all possible arguments before it makes a selection of the most suitable one…

The problem of argument selection can be considered the essence of strategy in ABN dialogues in general… However, there is very little existing work on strategies in multi-agent dialogues… Strategies depend on various factors such as the agents’ goals, the interaction protocol, the agents’ capabilities, the resources available to participants, and so on… Suitable argument selection in a negotiation context must take into account information about the negotiation counterpart…
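
Since selection must take the counterpart into account, I imagine a scoring rule along these lines (the criteria are my own assumptions, not from the paper): prefer the candidate that reveals the least private information and rests on premises the opponent already holds.

    def select_argument(candidates, opponent_believes):
        # each candidate is a (support, claim) pair; support is a set of premises
        def score(argument):
            support, _claim = argument
            disclosure = len(support - opponent_believes)   # new information we must reveal
            familiarity = len(support & opponent_believes)  # premises the opponent already holds
            return (disclosure, -familiarity)               # minimise disclosure, then maximise familiarity
        return min(candidates, key=score)

    candidates = [({"a", "b"}, "c"), ({"a", "d", "e"}, "c")]
    print(select_argument(candidates, opponent_believes={"a", "b"}))  # ({'a', 'b'}, 'c')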

6.3, External Elements of ABN Frameworks

Notes taken from Argumentation-Based Negotiation (ABN) (2003), by Iyad Rahwan et al.

1, Communication Language & Domain Language
A negotiation framework requires a language that facilitates such communication. Elements of the communication language are usually referred to as locutions or utterances or speech acts.

Basic (traditional) locutions include propose, accept and reject. ABN locutions would allow agents to pass meta-information either separately or in conjunction with other locutions.

In addition, agents often need a domain language for referring to concepts of the environment, the different agents, proposals, and so on.
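
A message in such a framework might be represented like this minimal sketch (locution names beyond propose/accept/reject are illustrative ABN-style additions, not a standard):

    from dataclasses import dataclass
    from enum import Enum, auto

    class Locution(Enum):
        PROPOSE = auto()
        ACCEPT = auto()
        REJECT = auto()
        CRITIQUE = auto()   # assumed meta-level locution: why a proposal is unacceptable
        JUSTIFY = auto()    # assumed meta-level locution: why a proposal was made

    @dataclass
    class Message:
        sender: str
        receiver: str
        locution: Locution
        content: str        # expressed in some domain language

    msg = Message("buyer", "seller", Locution.CRITIQUE, "price(car) > budget(buyer)")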

In multi-agent systems, two major proposals for agent communication have been advanced:
- the Knowledge Query and Manipulation Language (KQML) [1];
- the Foundation for Intelligent Physical Agents’ Agent Communication Language (FIPA ACL) [2], for which the contents of the messages can be in any domain language.

FIPA ACL has been given semantics in the form of pre- and post-conditions of each locution… While it offers the benefits of being a more or less standard agent communication language, it fails to capture all utterances needed in a negotiation interaction… While such locutions may be constructed by injecting particular domain language statements within locutions similar to those of FIPA ACL, the semantics of these statements fall outside the boundaries of the communication language…

In negotiation, the domain language must, at least, be capable of expressing the object of negotiation… There is also a need for a meta-language for explicitly expressing preferences…

Challenges (in the design of domain and communication languages for ABN):
- There is a need to provide rich communication languages with clear semantics [3]… There are opportunities for extending the model of [3] with a richer argumentation system.
- The building of common, standardised domain languages that agent designers can use in order to plug their agents into heterogeneous environments [4, 5]… There is a need for exploring the suitability of these domain languages for supporting ABN and understanding how arguments can be expressed and exchanged.

2, Negotiation Protocol
… a negotiation framework should also specify a negotiation protocol (a formal set of conventions governing the interaction among participants) in order to constrain the use of the language. This includes the interaction protocol as well as other rules of the dialogue, as follows:
- Interaction protocol: specifies, at each stage of the negotiation process, who is allowed to say what… It might be based solely on the last utterance made, or might depend on a more complex history of messages between agents.
- Rules for admission.
- Rules for participant withdrawal.
- Termination rules.
- Rules for proposal validity.
- Rules for outcome determination.
- Commitment rules.

In ABN, the negotiation protocol is usually more complex (involving a larger number of locutions and rules) than in non-ABN frameworks. This leads to computational complexity arising from processes such as checking locutions for conformance with the protocol, given the history of locutions.

… Interaction protocols can be either specified in an explicit accessible format (finite state machines, dialogue games [3]), or be only implicit and hardwired into the agents’ specification (specified using logical constraints expressed in the form of if-then rules).
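
For the explicit, finite-state-machine style of protocol, conformance checking reduces to replaying the history through the transition table. A toy sketch (the states, moves and transitions are my own invention):

    PROTOCOL = {
        ("open",     "propose"):  "proposed",
        ("proposed", "accept"):   "closed",
        ("proposed", "reject"):   "open",
        ("proposed", "critique"): "open",   # assumed ABN extension: rejection with reasons
    }

    def conforms(history, utterance):
        # replay the (assumed conformant) history, then test the new utterance
        state = "open"
        for move in history:
            state = PROTOCOL[(state, move)]
        return (state, utterance) in PROTOCOL

    print(conforms(["propose", "reject"], "propose"))  # True: may re-propose
    print(conforms(["propose", "accept"], "propose"))  # False: dialogue already closed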

Challenges:
- As faced in the design of argumentation protocols in general, there is a need for qualities such as fairness, clarity of the underlying argumentation theory, discouragement of disruption by participants, rule consistency, and so on.
- Termination… It is not clear whether results strongly dependent on particular underlying logical systems can be generalised to a variety of protocols without regard to the internal agent architecture.
- Guaranteed success (i.e. terminating with agreement)…
- Conformance checking (i.e. whether a particular utterance is acceptable, given the history and context of interaction)…
- The design of admission rules in negotiation protocols… To our knowledge, no ABN framework includes external rules that govern admission to the negotiation dialogue… More work needs to be done on investigating the effect of different admission rules on the outcome of negotiation…

3, Information Stores
In some ABN frameworks, there is no explicit centralised information store available. Instead, agents internally keep track of past utterances. However, in many negotiation frameworks there is a need to keep externally accessible information during interaction…

Commitment store: A type of information store used as a way of tracking the claims made by participants in dialogue games… Note that commitment stores should not be confused with the interaction history…

The representation and manipulation of information stores is not a trivial task, and has significant effects on both the performance and outcomes of negotiation dialogues. In particular, information store manipulation rules have a direct effect on the types of utterances agents can make given their previous utterances, the properties of the dialogue, and the final outcome.

Some of the key questions that need to be addressed in an ABN framework are:
- Under what conditions should an agent be allowed to retract its commitments and how would this affect the properties of dialogues?
- Under what conditions should an agent be forced to retract its commitments to maintain consistency?
- Specific to negotiation dialogues, do commitments to providing, requesting and exchanging resources require different treatment from commitments in other types of dialogue, such as persuasion or information seeking?
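
Picking up the first two questions above, a commitment store with a guarded retraction rule might look like this sketch (the rule that retraction requires refutation is my own assumption):

    class CommitmentStore:
        def __init__(self, owner):
            self.owner = owner
            self.commitments = set()

        def assert_claim(self, claim, support=()):
            # asserting an argument also commits the agent to every support element
            self.commitments.add(claim)
            self.commitments.update(support)

        def retract(self, claim, proven_wrong=False):
            # assumed rule: retraction is allowed only once the claim is refuted
            if not proven_wrong:
                raise ValueError(f"{self.owner} has no grounds to retract '{claim}'")
            self.commitments.discard(claim)

    store = CommitmentStore("agent_a")
    store.assert_claim("c", support={"a", "b"})
    store.retract("b", proven_wrong=True)   # allowed: 'b' has been refuted
    # store.retract("c")                    # would raise: no grounds for retraction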

6.2, Approaches To Automated Negotiation

Notes taken from Argumentation-Based Negotiation (ABN) (2003), by Iyad Rahwan et al.

1, Game-theoretic Approaches to Negotiation
Game theory is a branch of economics that studies the strategic interactions between self-interested economic agents (and recently, self-interested computational agents).

In game-theoretic analysis, researchers usually attempt to determine the optimal strategy by analysing the interaction as a game between identical participants, seeking its equilibrium…

However, classical game-theoretic approaches have some significant limitations from the computational perspective. Specifically, most of these approaches assume that agents have unbounded computational resources and that the space of outcomes is completely known…

2, Heuristic-based Approaches to Negotiation
To address some of the aforementioned limitations of game-theoretic approaches, a number of heuristics have emerged. Heuristics are rules of thumb that produce good enough (rather than optimal) outcomes and are often produced in contexts with more relaxed assumptions about agents’ rationality and resources…

Disadvantages: Firstly, the models often lead to sub-optimal outcomes, because they adopt an approximate notion of rationality and do not examine the full space of possible outcomes. Secondly, it is very difficult to predict precisely how the system and the constituent agents will behave…

3, Argumentation-based Approaches to Negotiation
Limitations of conventional negotiation approaches:
- Agents are not allowed to exchange any additional information other than what is expressed in the proposal itself. This can be problematic, for example, in situations where agents have limited information about the environment, or where their rational choices depend on those of other agents.
- Agents’ utilities or preferences are usually assumed to be completely characterised prior to the interaction. Thus an agent is assumed to have a mechanism by which it can assess and compare any two proposals. This is not always the case…
- Agents’ preferences over proposals are often assumed to be proper, in the sense that they reflect the true benefit the agent receives from satisfying these preferences.
- Agents’ utilities or preferences are assumed to be fixed. One agent cannot directly influence another agent’s preference model, or any of its internal mental attitudes (e.g. beliefs, desires, goals, etc.) that generate its preference model…

Argumentation-based approaches to negotiation attempt to overcome the above limitations by allowing agents to exchange additional information, or to “argue” about their beliefs and other mental attitudes during the negotiation process.

In the context of negotiation, we view an argument as a piece of information that may allow an agent to (a) justify its negotiation stance; or (b) influence another agent’s negotiation stance.

Thus, in addition to accepting or rejecting a proposal, an agent can offer a critique of it. This can help make negotiations more efficient. By understanding why its counterpart cannot accept a particular deal, an agent may be in a better position to make an alternative offer that has a higher chance of being acceptable…

Another type of information that can be exchanged is a justification of a proposal, stating why an agent made such a proposal or why the counterpart should accept it. This may make it possible to change the other agent’s region of acceptability or the nature of the negotiation space itself (by introducing new attributes/dimensions to the negotiation object)…

An agent might also make a threat or promise a reward in order to exert some pressure on its counterpart to accept a proposal…
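
These four kinds of meta-information (critique, justification, threat, reward) could be carried as structured attachments to a proposal; a representation sketch of my own devising:

    from dataclasses import dataclass

    @dataclass
    class Proposal:
        content: str

    @dataclass
    class MetaInfo:
        # kind is one of the moves described above (my own encoding):
        # "critique", "justification", "threat", "reward"
        kind: str
        target: Proposal
        detail: str

    offer = Proposal("swap(nail, hammer)")
    push = MetaInfo("threat", offer, "otherwise I end the trading relationship")
    pull = MetaInfo("reward", offer, "and I will lend you the saw next week")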

4, Summary
There is no universal approach to automated negotiation that suits every problem domain. Rather, there is a set of approaches, each based on different assumptions about the environment and the agents involved in the interaction…

ABN frameworks are gaining increasing popularity for their potential to overcome the limitations of more conventional approaches to automated negotiation. However, such models are typically more complex than their game-theoretic and heuristic counterparts.

6.1, Argumentation-Based Negotiation (Intro)

Notes taken from Argumentation-Based Negotiation (2003), by Iyad Rahwan et al.

“… the frameworks reviewed in this article represent different preliminary attempts at solving parts of the puzzle by:
(i) constructing generic models of ABN (Sierra et al., 1998);
(ii) constructing limited, yet implementable systems, and studying their applicability (Kraus et al. 1998, Sadri et al. 2001);
(iii) studying the applicability of particular logic-based argumentation frameworks to ABN (Parsons et al. 1998, Amgoud et al. 2000);
(iv) studying the properties of different decision making components and concepts such as trust in controlled settings (Ramchurn et al 2003);
(v) studying the different types of influences that can be attempted by participants in an ABN dialogue (Rahwan et al. 2003).”

“In this article we aim at setting up a research agenda for argumentation-based negotiation (ABN) in multi-agent systems. We do so by achieving the following…”
- Identifying the main research motivations and ambitions behind work in the field, and setting up a research agenda for ABN in multi-agent systems.
- Discussing the characteristics of traditional approaches and demonstrating why they fail in particular circumstances due to their underlying assumptions.
- Identifying the main features of ABN approaches, the main components of an abstract framework for ABN, and discussing the different attempts to realise these components.
- Providing a conceptual framework through which we outline the core elements and features required by agents engaged in ABN, as well as the environment that hosts these agents.
- Discussing, in detail, the essential elements of ABN frameworks and the agents that operate within these frameworks. In particular, constructing a conceptual model of ABN, involving external elements (namely, the communication and domain languages, the negotiation protocol, and the information stores) and agent-internal elements (namely, the ability to evaluate, generate, and select proposals and arguments).
- Surveying, evaluating and presenting existing work and proposed techniques (for each of the required elements) in the literature.
- Identifying and highlighting the major challenges encountered in the field, and the opportunities that remain to be addressed if ABN research is to reach its full potential.

The article is structured as follows:
- Reviewing the different approaches to automated negotiation and outlining the contexts in which we believe argumentation-based approaches would be most useful (Section 2).
- Describing, in detail, the elements of an argumentation-based framework that are external to the agents, namely the communication and domain languages, the negotiation protocol, and various information stores (Section 3).
- Discussing the various internal elements and functionalities necessary to enable an agent to conduct ABN. More precisely, the process of argument and proposal evaluation, argument and proposal generation, and argument selection (Section 4).
- Summarising the landscape of existing frameworks (Section 5).
- Stating conclusions and summarising the major challenges (Section 6).

1, Introduction
An agent is viewed as an encapsulated computer system that is situated in an environment and is capable of flexible, autonomous action in order to meet its design objectives…

Negotiation is a form of interaction in which a group of agents, with conflicting interests and a desire to cooperate, try to come to a mutually acceptable agreement on the division of scarce resources.

Resources (taken in the broadest possible sense) can be commodities, services, time, money, etc. In short, anything that is needed to achieve something.

Argumentation-based approaches allow for more sophisticated forms of interaction than their game-theoretic and heuristic counterparts. This raises a number of research challenges related to both the design of the interaction environment as well as the agents participating in that interaction.