Saturday, 31 March 2007

15.5-15.7, A Generative Inquiry Dialogue System

Notes taken from ‘A Generative Inquiry Dialogue System’ (2007), by Elizabeth Black and Anthony Hunter

5, Soundness and Completeness

5.1, The argument inquiry outcome of a dialogue is a function… if D is a well-formed argument inquiry dialogue with participants x1 and x2, then… (given D the outcome is the set of all arguments that can be constructed from the union of the commitment stores and whose claims are in the question store).

… The benchmark that we compare the outcome of the dialogue with is the set of arguments that can be constructed from the union of the two agents’ beliefs. So this benchmark is, in a sense, the ‘ideal situation’ where there are clearly no constraints on the sharing of beliefs.

5.2, Let D be a well-formed argument inquiry dialogue with participants x1 and x2. We say that D is sound if and only if… (when the outcome of the dialogue includes an argument, then that same argument can be constructed from the union of the two participating agents’ beliefs).

(Theorem 5.1) If D is a well-formed argument inquiry dialogue with participants x1 and x2, then D is sound. Proof: …

5.3, Let D be a well-formed argument inquiry dialogue with participants x1 and x2. We say that D is complete iff… (if the dialogue terminates at t and it is possible to construct an argument for a literal in the question store from the union of the two participating agents’ beliefs, then that argument will be in the outcome of the dialogue at t.)

(Theorem 5.2) If D is a well-formed argument inquiry dialogue with participants x1 and x2, then D is complete. Proof: …

6, Future work…

… We would like to allow more than two agents to take part in an argument inquiry dialogue…

We currently assume that an agent’s belief base does not change during a dialogue, and would like to consider the implication of dropping this assumption…

We would also like to further explore the benchmark which we compare our dialogue outcomes to…

… We would like to further investigate the utility of argument inquiry dialogues when embedded in dialogues of different types.

7, Conclusions…

15.4, A Generative Inquiry Dialogue System

Notes taken from ‘A Generative Inquiry Dialogue System’ (2007), by Elizabeth Black and Anthony Hunter

4, Dialogue System

The communicative acts in a dialogue are called moves. We assume that there are always exactly two agents (participants) taking part in a dialogue… Each participant takes its turn to make a move to the other participant…

A move in our system is of the form (Agent, Act, Content)… e.g. (x, ‘open’, belief), (x, ‘assert’, (support, conclusion)), (x, ‘close’, belief)…

4.1, A dialogue… is a sequence of moves of the form… (The first move of a dialogue… must always be an open move…, every move of the dialogue must be made to a participant of the dialogue…, and the agents take it in turns to receive moves…)

4.2, (Terminology that allows us to talk about the relationship between two dialogues: ‘Sub-dialogue of’…, ‘top-level dialogue’…, ‘top-dialogue of’…, and ‘extends’…)

4.3, Let D be a dialogue with participants x1 and x2 such that topic(D) = gamma. We say that m(s)… is a matched-close for D iff m(s-1) = (P, close, gamma) and m(s) = (~P, close, gamma).

So a matched-close will terminate a dialogue D but only if D has not already terminated and any sub-dialogues that are embedded within D have already terminated.
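As a minimal sketch, the matched-close condition can be checked over a sequence of moves. The (agent, act, content) tuple representation below is assumed for illustration, not the paper's exact notation:

```python
# Minimal sketch of the matched-close condition of Definition 4.3.
# Moves are assumed to be (agent, act, content) tuples; timepoints are
# 1-indexed as in the notes.
def is_matched_close(moves, s, topic):
    """True iff moves s-1 and s are close moves on `topic` by different agents."""
    if s < 2 or s > len(moves):
        return False
    prev, curr = moves[s - 2], moves[s - 1]
    return (prev[1] == "close" and curr[1] == "close"
            and prev[2] == topic and curr[2] == topic
            and prev[0] != curr[0])
```

For example, given moves = [("x1", "open", "g"), ("x2", "close", "g"), ("x1", "close", "g")], is_matched_close(moves, 3, "g") holds, while is_matched_close(moves, 2, "g") does not.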

4.4, Let D be a dialogue. D terminates at t iff the following conditions hold…

4.5, The current dialogue is… (the innermost dialogue that has not yet terminated).

We adopt the standard approach of associating a commitment store with each agent participating in a dialogue. A commitment store is a set of beliefs that the agent has asserted so far in the course of the dialogue. As commitment stores consist of things that the agent has already publicly declared, its contents are visible to the other agent participating in the dialogue. For this reason, when constructing an argument, an agent may make use of not only its own beliefs but also those from the other agent’s commitment store.

4.6, A commitment store associated with an agent… is a set of beliefs.

An agent’s commitment store grows monotonically over time. If an agent makes a move asserting an argument, every element of the support is added to the agent’s commitment store. This is the only time the commitment store is updated.

4.7, Commitment store update
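The update rule can be sketched as follows; the move representation is assumed, not the paper's notation. Only an assert move changes a store, and it adds the whole support of the asserted argument, so stores grow monotonically:

```python
# Sketch of the commitment-store update rule (Definition 4.7), under an
# assumed (agent, act, content) move encoding. An assert move carries a
# (support, claim) pair; every element of the support is added.
def update_commitment_store(store, move):
    agent, act, content = move
    if act == "assert":
        support, claim = content
        return store | set(support)   # grows monotonically
    return store                      # open and close moves leave it unchanged
```

For example, update_commitment_store(set(), ("x1", "assert", (("a", "b"), "p"))) yields {"a", "b"}.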

… when an argument inquiry dialogue with topic ‘alpha1 ^ … ^ alphaN -> phi’ is opened, a question store associated with that dialogue is created whose contents are {alpha1, …, alphaN}. Throughout the dialogue the participant agents will both try to provide arguments for the literals in the question store. This may lead them to open further nested argument inquiry dialogues whose topic is a rule whose consequent is a literal in the question store.

4.8… a question store… is a finite set of literals such that…

A protocol is a function that returns the set of moves that are legal for an agent to make at a particular point in a particular type of dialogue…

4.9, The argument inquiry protocol is a function (that takes the top-level dialogue that the agents are participating in and the identifier of the agent whose turn it is to move, and returns the set of legal moves that the agent can make)…

(An agent can only assert an argument or open a rule for a conclusion that is in the current question store, and such that the move has not been made by either agent at an earlier timepoint in the dialogue.)

Note that it is straightforward to check conformance with the protocol as the protocol only refers to public elements of the dialogue.

… a specific strategy function… allows an agent to select exactly one legal move to make at each timepoint in an argument inquiry dialogue. A strategy is personal to an agent and the move that it returns depends on the agent’s private beliefs. The argument inquiry strategy states that if there are any legal moves that assert an argument that can be constructed by the agent (by means of its belief base and the other agent’s commitment store) then a single one of those moves is selected…, else if there are any legal open moves with a defeasible rule as their content that is in the agent’s beliefs then a single one of these moves is selected… If there are no such moves then a close move is made.
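The strategy's precedence (assert if possible, else open, else close) can be sketched as below; pick_a and pick_o stand in for the paper's pickA/pickO choice functions, and the legality and constructibility checks are assumed to be supplied from outside:

```python
# Sketch of the argument inquiry strategy's precedence order. Moves are
# assumed to be (agent, act, content) tuples; can_construct and has_rule
# are stand-ins for the agent's private checks against its belief base
# and the other agent's commitment store.
def pick_a(moves):
    return sorted(moves)[0]   # any deterministic choice function will do

def pick_o(moves):
    return sorted(moves)[0]

def argument_inquiry_strategy(agent, legal_moves, can_construct, has_rule):
    """Return exactly one of the legal moves."""
    asserts = [m for m in legal_moves
               if m[1] == "assert" and can_construct(m[2])]
    if asserts:
        return pick_a(asserts)
    opens = [m for m in legal_moves if m[1] == "open" and has_rule(m[2])]
    if opens:
        return pick_o(opens)
    return (agent, "close", None)
```

So an agent that can construct an argument always asserts one; only an agent with nothing left to assert or open makes a close move.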

4.10… The function pickO returns the chosen open move

4.11… The function pickA returns the chosen assert move

4.12, The argument inquiry strategy is a function (that takes the top-level dialogue that the agents are participating in and the identifier of the agent whose turn it is to move, and returns exactly one of the legal moves)…

Note that a top-level argument inquiry dialogue will always have a defeasible fact as its topic, but each of its sub-dialogues will always have a defeasible rule as its topic.

4.13, D is a well-formed argument inquiry dialogue iff… (D does not continue after it has terminated and is generated by the argument inquiry strategy).

… Note that we assume some higher-level planning component that guides the agent when deciding whether to enter into a dialogue, who this dialogue should be with and on what topic, i.e. that makes the decision to make the move m1.

15.1-15.3, A Generative Inquiry Dialogue System

Notes taken from ‘A Generative Inquiry Dialogue System’ (2007), by Elizabeth Black and Anthony Hunter

“… We focus on inquiry dialogues that allow two agents to share knowledge in order to construct an argument for a specific claim…”

1, Introduction

Dialogue games are now a common approach to defining communicative agent behaviour, especially when this behaviour is argumentation-based… Dialogue games are normally made up of a set of communicative acts called moves, a set of rules that state which moves it is legal to make at any point in a dialogue (the protocol), a set of rules that define the effect of making a move, and a set of rules that determine when a dialogue terminates. Most of the work so far has looked at modelling different types of dialogue in the Walton and Krabbe typology… here we provide a generative system.

... A key contribution of this work is that we not only provide a protocol for modelling inquiry dialogues but we also provide a specific strategy to be followed, making this system sufficient to also generate inquiry dialogues… and this allows us to consider soundness and completeness properties of our system.

2, Motivation

Our work has been motivated by the medical domain. Argumentation allows us to deal with the incomplete, inconsistent and uncertain knowledge that is characteristic of medical knowledge. There are often many different healthcare professionals involved in the care of a patient, each of whom has a particular type of specialised knowledge and who must cooperate in order to provide the best possible care for the patient…

Inquiry dialogues are a type of dialogue that would be of particular use in the healthcare domain, where it is often the case that people have distinct types of knowledge and so need to interact with others in order to have all the information necessary to make a decision…

… We compare the outcome of our dialogues with the outcome that would be arrived at by a single agent that has as its beliefs the union of the beliefs of both agents participating in the dialogue. This is, in some sense, the ideal situation, where there are no constraints on the sharing of beliefs.

3, Knowledge Representation and Arguments

We adapt Garcia and Simari’s Defeasible Logic Programming (DeLP)… for representing each agent’s beliefs…

The presentation in this section differs slightly from that in (Garcia and Simari’s DeLP)… as (they) assume a set of strict rules, which we assume to be empty, and they assume facts to be non-defeasible. We assume that all knowledge is defeasible due to the nature of medical knowledge, which is constantly expanding…

3.1, A defeasible rule is denoted 'alpha1 ^ … ^ alphaN -> alpha0' where alphai is a literal… A defeasible fact is denoted 'alpha' where alpha is a literal. A belief is either a defeasible rule or a defeasible fact.

3.2, A belief base associated with an agent x is a finite set…

3.3… A defeasible derivation of (a literal) ‘alpha’ from (a set of beliefs)… is a finite sequence alpha1, alpha2, …, alphaN of literals such that alphaN is alpha and each literal… is in the sequence because…

3.4, An argument constructed from a set of, possibly inconsistent, beliefs… is a minimally consistent set from which the claim can be defeasibly derived…

Friday, 30 March 2007

Why don't the agents just pool together all their beliefs?

Quote taken from the last paragraph of section 4 of 'A Generative Inquiry Dialogue System' by Elizabeth Black and Anthony Hunter

In our simple example, it seems that it would be more straightforward to pool both agents’ beliefs and apply a reasoning procedure to this set of beliefs. However, given a real-world scenario this would not necessarily be the case. When dealing with the medical domain we have to consider privacy issues that would restrict agents from simply pooling all beliefs. It is also sometimes the case that agents have vast belief bases and the communication cost involved in sharing all beliefs would be prohibitive.

Thursday, 29 March 2007

What is an Inquiry Dialogue?

Paragraph taken from the introduction of 'A Generative Inquiry Dialogue System' by Elizabeth Black and Anthony Hunter (2007)

In this paper we focus on inquiry dialogues. Walton and Krabbe define an inquiry dialogue as arising from an initial situation of "general ignorance" and as having the main goal to achieve the "growth of knowledge and agreement". Each individual participating in an inquiry dialogue has the goal to "find a 'proof' or destroy one" ('Commitment in Dialogue: Basic Concepts of Interpersonal Reasoning', page 66)... we have defined two different types of inquiry dialogue, each of which we believe fits the general definition:

- warrant inquiry dialogue - the 'proof' takes the form of a dialectical tree (essentially a tree with an argument at each node, whose arcs represent the counter-argument relation and that has at its root an argument whose claim is the topic of the dialogue).

- argument inquiry dialogue - the 'proof' takes the form of an argument for the topic of the dialogue.

Argument inquiry dialogues are commonly embedded in warrant inquiry dialogues. In this paper, we will focus only on argument inquiry dialogues.

Wednesday, 28 March 2007

13.4-13.5, Computing ideal sceptical argumentation

Notes taken from ‘Computing ideal sceptical argumentation’ by P. M. Dung, Paolo Mancarella and F. Toni (2006)

4, Computing ideal beliefs for assumption-based argumentation

… Our proof procedure for computing ideal beliefs for assumption-based frameworks is defined in terms of IB-dispute derivations, adapted from (a variant of) the dispute derivations of the paper ‘Dialectic proof procedures for assumption-based, admissible argumentation’ for computing admissible beliefs, which we refer to here as AB-dispute derivations.

By virtue of relying upon potential attacks, dispute derivations actually construct potential dispute trees, which may correspond to one, none or multiple concrete dispute trees.

Note that all our dispute derivations will be of finite length. This is because our ultimate goal is to develop effective proof procedures that can be used to support practical applications.

Computing admissible beliefs

The efficient construction of admissible dispute trees for assumption-based argumentation frameworks can be obtained via AB-dispute derivations… These are variants of the dispute derivations of ‘Dialectic proof procedures for assumption-based, admissible argumentation’, that improve upon them by being “more complete”… GB-dispute derivations (which compute grounded beliefs) can be seen as an initial step between dispute trees and the actual, final form of AB-dispute derivations, in that they are correct, but highly incomplete and inefficient…

4.1, Let ABF be an assumption-based argumentation framework, and AF be the corresponding abstract framework. Given a dispute tree T for AF,
- for any opponent node labelled by an argument ‘b’ with child a proponent node labelled by an argument ‘a’, if ‘a’ attacks some assumption ‘alpha’ in the set supporting ‘b’ then ‘alpha’ is said to be the culprit in ‘b’;
- the set of all assumptions supporting the arguments in the defence set of T is referred to as the assumption-defence set of T.

(Theorem 4.1) … Given a dispute tree T for AF, T is admissible iff no culprit in the argument of an opponent node belongs to the assumption-defence set of T.

In GB-dispute derivations:
- the set of culprits Ci is used to filter potential defence arguments…
- the set of defence assumptions Ai is used to filter potential culprits…
so that the final assumption-defence set constructed by the derivation does not attack itself (theorem 4.1).

4.2… Given a selection function, a GB-dispute derivation of a defence set A for a sentence ‘alpha’ is a finite sequence of quadruples (P0, O0, A0, C0), …, (Pi, Oi, Ai, Ci), …, (Pn, On, An, Cn) where…

(Theorem 4.2) Given a GB-dispute derivation of a defence set A for a sentence ‘alpha’:
- A is admissible and it is contained in the grounded set of assumptions;
- There exists A’ (a subset of A) such that A’ |- alpha is an argument for ‘alpha’.

Note that GB-dispute derivations succeed in many cases where other procedures for computing grounded beliefs fail. However, GB-dispute derivations are “highly incomplete” for admissibility, in that they fail to compute admissible sets in many cases, corresponding to infinite dispute trees.

AB-dispute derivations incorporate a filtering by defence assumptions so that they can (finitely) compute infinite admissible dispute trees… Moreover, AB-dispute derivations incorporate a filtering by culprit assumptions so that they can be more efficient…

4.3… Given a selection function, an AB-dispute derivation of a defence set A for a sentence ‘alpha’ is a finite sequence of quadruples (P0, O0, A0, C0), …, (Pi, Oi, Ai, Ci), …, (Pn, On, An, Cn) where…

(Theorem 4.3) For every AB-dispute derivation of a defence set A for a sentence ‘alpha’, the defence set A is admissible, and there exists some A’ (a subset of A) that supports an argument for ‘alpha’.

4.4, An assumption-based framework AF is positively acyclic (or p-acyclic for short) if the dependency graph of AF+ is acyclic. (AF+ is the framework obtained by deleting all assumptions appearing in the premises of the inference rules of R.)

(Lemma 4.1) Given an assumption-based framework, let an infinite partial deduction be an infinite sequence of steps defined as in Definition 2.4. Given a p-acyclic framework, there exists no infinite partial deduction.

Computing ideal beliefs

… Like AB-dispute derivations, IB-dispute derivations are sequences of tuples, but these tuples are of the form (Pi, Oi, Ai, Ci, Fi). Fi is a novel component intuitively holding all potential attacks against the proponent: by virtue of theorem 3.3, IB-dispute derivations need to make sure that no admissible set of assumptions containing any attack in Fi exists. This check is ultimately performed by a new kind of (subsidiary) dispute derivation, which we call Fail-dispute derivations.

(Notation 4.2) Let S be a set of sentences in L. By Fail(S) we mean that there exists no admissible set E of assumptions such that E ||- S.

IB-dispute derivations are sequences of tuples of the form (Pi, Oi, Ai, Ci, Fi) where
- the new component Fi holds all multisets S for which we want to prove that Fail(S) (these are the potential attacks S dealt with in step 2 of AB-dispute derivation)
- Pi, Oi, Ai, Ci are as in AB-dispute derivations, except that sentences occurring in the multisets in Oi may be marked.

Intuitively, IB-dispute derivations compute an admissible support for the given sentence ‘alpha’ while trying to check that no admissible set attacks it. As soon as a (potential) attack is found, it is stored in the F component of the tuple to check that it fails to become admissible. Whenever a potential culprit is ignored in a potential attack, it is marked so that it will not be selected again. Selected elements in the potential attacks in the O component are chosen amongst the unmarked elements. Thus, we will impose that, given a multiset S in Oi, the selection function will only select unmarked sentences in Su. (Su is the set of unmarked sentences in S.)

4.5… Given a selection function, an IB-dispute derivation of an ideal support A for a sentence alpha is a finite sequence of tuples (P0, O0, A0, C0, F0), …, (Pi, Oi, Ai, Ci, Fi), …, (Pn, On, An, Cn, Fn) where…

(Theorem 4.5) If there exists an IB-dispute derivation for ‘alpha’, then ‘alpha’ is an ideal belief.

(Theorem 4.6) Given a p-acyclic framework with an underlying finite language, if ‘alpha’ is an ideal belief then there exists an IB-dispute derivation for ‘alpha’.

4.6… Given a selection function, a Fail-dispute derivation of a multiset of sentences S is a sequence D0, …, Dn such that each Di is a set of quadruples of the form (P, O, A, C) where…

(Theorem 4.7) If there exists a Fail-dispute derivation for a multiset of sentences S then Fail(S) holds.

(Corollary 4.2) If there exists an IB-dispute derivation for ‘alpha’ in which Fail-dispute derivations are used to check whether Fail holds, then ‘alpha’ is an ideal belief (i.e. the derivations are sound).

Fail-dispute derivations are complete if AB-dispute derivations are complete for the admissibility semantics. Thus:

(Theorem 4.8) For p-acyclic frameworks with a finite underlying language, if there exists no admissible argument supporting S then there is a Fail-dispute derivation for S.

(Corollary 4.3) For p-acyclic frameworks with a finite underlying language, IB-dispute derivations in which Fail-dispute derivations are used to check Fail, are complete.

5, Conclusion…

Making a set of inference rules "proper"

Given this set of inference rules:

p <- a, b
~a <- ~a
~b <-
s <- ~p

where the set of candidate assumptions is {a, b, ~p}

In trying to make this set of rules "proper" by taking into account the finer details, I came up with this:

p <- a, b
not_a <- not_a, c
not_b <- d
s <- e

where
the set of candidate assumptions is {a, b, c, d, e}
and the contrary relations are
~a = {not_a}, ~b = {not_b}, ~c = {a}, ~d = {b}, ~e = {p}

Is this correct? In particular, I'm not sure whether the rule 'not_a <- not_a, c'
captures the loop of '~a <- ~a', and whether the contrary of 'c' should be {a} or
something else.


You have introduced far too many elements in your translation, it would have been sufficient to say:

p <- a, b
~a <- ~a
~b <-
s <- ~p

where {a,b,~p} are assumptions with contraries ~a, ~b, p respectively.

Please do not forget that the underlying deductive system can be anything, and the only requirement on assumptions is that they do not occur as conclusions of rules, so the above is a perfectly legitimate assumption-based framework that makes our point.

If you would like to avoid the use of ~ (which is fine), you could have

p <- a, b
not_a <- not_a
not_b <-
s <- not_p

where {a,b,not_p} are assumptions with contraries not_a, not_b, p respectively.

You can see that the only difference with the above is purely syntactic.

Completeness of AB-dispute derivations

“… AB-dispute derivations (as found in the 'Computing Ideal Sceptical Argumentation' paper) are not complete in general…” Are AB-dispute derivations only complete under certain conditions? What conditions?

There mustn't be any loops of non-assumption literals (i.e. the argumentation framework must be "positively acyclic" as defined in the 'Computing Ideal Sceptical Argumentation' paper). For example, given an inference rule 'p <- p', the dispute derivation for '~p' would not "succeed".
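This condition can be sketched as a cycle check; the rule encoding below is assumed, not the paper's notation:

```python
# Sketch of the p-acyclicity check of Definition 4.4 of the paper:
# delete assumptions from rule bodies, then look for a cycle in the
# dependency graph of the remaining (non-assumption) sentences.
def p_acyclic(rules, assumptions):
    """rules: iterable of (premises, conclusion) pairs."""
    graph = {}
    for premises, conclusion in rules:
        deps = {p for p in premises if p not in assumptions}
        graph.setdefault(conclusion, set()).update(deps)

    visiting, done = set(), set()

    def cyclic(node):
        if node in done:
            return False
        if node in visiting:          # back edge: a loop of non-assumptions
            return True
        visiting.add(node)
        found = any(cyclic(d) for d in graph.get(node, ()))
        visiting.discard(node)
        done.add(node)
        return found

    return not any(cyclic(n) for n in list(graph))
```

So p_acyclic([(("p",), "p")], set()) is False (the 'p <- p' loop above), while p_acyclic([(("a", "b"), "p")], {"a", "b"}) is True, since the rule body contains only assumptions.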

Tuesday, 27 March 2007

"Soundness" and "Success"

“Note that step (2ia) (the ignoring of assumptions) (in the 'Computing Ideal Sceptical Argumentation' paper) is not needed to guarantee soundness, but it is helpful to guarantee success in finding GB-dispute derivations in many cases…” What is meant here by "success" and "soundness"? How do they differ?

"Success" means that the procedure completes (i.e. doesn't get stuck). "Soundness" means that if an assumption is found to be grounded according to the procedure then that assumption is in the grounded set of assumptions.

As an example (taken from 'Dialectic proof procedures for assumption-based, admissible argumentation', example 7.2), given a set of inference rules as:

¬s <- q
¬q <- r, t
¬t

where the set of assumptions is {q, r, t}, and given a left-most selection function, there exists a derivation for the sentence ¬s of defence set {q} only if one of the outputs of the selection function is ignored (i.e. 'r'). Thus, the ignoring of assumptions is required for the procedure to "succeed".

GB-dispute derivations

How do the GB-dispute derivations (for computing grounded beliefs) as found in the 'Computing Ideal Sceptical Argumentation' paper differ from the AB-dispute derivations (for computing admissible beliefs)?

They differ in that the AB-dispute derivations have two extra filtering checks. As a result, the GB-dispute derivations are "less complete" and less efficient.

The GB-dispute derivations do have "filtering of potential defence arguments by culprits" and "filtering of culprits by defence assumptions", so that the final assumption-defence set constructed by the derivation does not attack itself. But they do not have "filtering of defence assumptions by defences" or "filtering of culprits by culprits", and thus are "highly incomplete" for admissibility and less efficient.

In what sense do the GB-dispute derivations compute grounded beliefs?

The proof of this is long. Informally, because there is no "filtering of defence assumptions by defences", the procedure will rightly fail (loop) for all non-well-founded frameworks.

Monday, 26 March 2007

14, The Search

Quotes taken from ‘The Search: How Google and its rivals rewrote the rules of business and transformed our culture’ (2005) by John Battelle

Page 1 - The Database of Intentions
“The library of Alexandria was the first time humanity attempted to bring the sum total of human knowledge together in one place at one time. Our latest attempt? Google.” (Brewster Kahle, entrepreneur and founder, the Internet Archive)

Page 19 – Who, What, Where, Why, When, and How (Much)
“Judge of a man by his questions, rather than by his answers.” (Voltaire)

Page 262 – The Perfect Search (Search as the New Interface)
… A9 has broken search into its two most basic parts. Recovery is everywhere you’ve been before (and might want to go again); discovery is everything you may wish to find, but have yet to encounter. A9 attacks recovery through its search history feature and toolbar, which tracks every site you visit. The discovery feature finds sites you might be interested in on the basis of your clickstream and – here’s the neat part – the clickstream of others…

Page 263-266 – The Perfect Search (The Semantic Web?)
But perfect search will require more than ubiquity, clickstreams, and personalisation. The vast corpus of information now available to us is often meaningless unless it is somehow tagged – identified in such a way that search engines can best make sense of it and serve it up to us. Many in the search industry believe search will be revolutionised by what is called metadata. Clickstreams are a form of metadata – information about where you go and what you choose as you browse the Web. But to get to more perfect search, we need to create a more intelligent Web. That means tagging the relatively dumb Web pages that make up most of the Web as we know it today with some kind of code that declares, in a machine-readable universal lingo, what they are, what they are capable of doing, and how they might change over time.

This is the vision of the semantic Web… While it’s always dangerous to lean too heavily on metaphor, the basic idea is that with semantic tags, the Web becomes more like a structured database… making it far easier to find things. This in turn allows the rules of logic, or reason, into the equation.

This structure also makes it easier to do things, to execute complex tasks built upon finding things – scheduling a meeting, planning a trip, organising a wedding, you name it…

But a major hurdle to the rise of the semantic Web has been standards: who gets to say which tags are right for which pages? If there is a picture of a Cape Cod seashore on the Web, should it be tagged as “beach”, “shore”, “ocean”, or any number of other possible words? As Yahoo learned early in its directory days, the nearly limitless possibilities of the Web do not lend themselves to top-down, human-driven solutions.

Again, this is where the Force of the Many comes in. In late 2004 and throughout 2005, a new kind of tagging scheme arose, one based not on any strict, top-down hierarchy, but rather on a messy, bottom-up approach. Small start-up companies like Flickr, Technorati (a weblog search engine), and del.icio.us (a link sharing site) began giving their users the ability to tag anything they saw, and then to share those tags with others. By letting anyone tag anything, the theory goes, ultimately a kind of fuzzy relevance for any given item will emerge. The photo of the Cape Cod seascape, for example, will probably be tagged with all the possible descriptors. That way, no matter what phrase a person uses to search it, whether it’s “ocean photos” or “Cape Cod seascapes,” the photo will be found.

Early bloggers dubbed this approach folksonomies – folk + taxonomy – and the movement is gaining momentum…

Page 280 – Perfect Search (The search for perfection)
Whatever your first search moment was, there will be many, many more as the space evolves. Search is no longer a standalone application, a useful but impersonal tool for finding something on a new medium called the World Wide Web. Increasingly, search is our mechanism for how we understand ourselves, our world, and our place within it. It’s how we navigate the one infinite resource that drives human culture: knowledge. Perfect search – every single possible bit of information at our fingertips, perfectly contextualised, perfectly personalised – may never be realised. But the journey to find out if it just might be is certainly going to be fun.

"Countable" versus "Finite"

I noticed use of the word "countable" in the description of the deductive system in the 'Computing ideal sceptical argumentation' paper:

"... countably many sentences..."
"... a countable set of inference rules..."

Is this different to the word "finite"?


Yes. The set of natural numbers is countable but infinite. The set of real numbers is not even countable.

Sunday, 25 March 2007

13.1-13.3, Computing ideal sceptical argumentation

Notes taken from ‘Computing ideal sceptical argumentation’ by P. M. Dung, Paolo Mancarella and F. Toni (2006)

1, Introduction

… The semantics of admissible arguments is credulous, in that it sanctions a set as acceptable if it can successfully dispute every argument against it, without disputing itself. However, there might be conflicting admissible sets. In some applications, it is more appropriate to adopt a sceptical semantics, whereby, for example, only beliefs sanctioned by all (maximally) admissible sets of assumptions are held. For example, in the legal domain, different members of a jury could hold different admissible sets of assumptions but a guilty verdict must be the result of sceptical reasoning. Also, in a multi-agent setting, agents may have competing plans for achieving goals (where a plan can be interpreted as an argument for the goal it allows to achieve), and, when negotiating resources, they may decide to give away a resource only if that resource is not needed to support any of their plans. Furthermore, in the same setting, agents may decide to request an “expensive” resource from another agent only if that resource is useful to render all plans for achieving its goal executable.

Several sceptical semantics have been proposed for argumentation frameworks, notably the grounded semantics and the semantics whereby beliefs held within all maximally admissible sets of arguments are drawn, referred to as the sceptically preferred semantics. The grounded semantics can be easily computed but is often overly sceptical. Procedures for the computation of the sceptically preferred semantics exist… However, to the best of our knowledge, no procedure exists for computing the sceptically preferred semantics for non-coherent cases.

The ideal semantics is sceptical, and has the advantage of being easily computable, by a modification of the machinery presented in ‘Dialectic proof procedures for assumption-based, admissible argumentation’, but without being overly sceptical…

2, Argumentation frameworks and semantics

2.1, An abstract argumentation framework is a pair (Arg, attacks) where Arg is a finite set, whose elements are referred to as arguments, and ‘attacks’ is a binary relation over Arg. Given sets X, Y of (Arg) arguments, X attacks Y iff there exists x (in X) and y (in Y) such that (x, y) is a member of ‘attacks’.

… Several notions of “acceptable” sets of arguments can be defined…

2.2, A set X of arguments is
- admissible iff X does not attack itself and X attacks every argument Y such that Y attacks X;
- preferred iff X is maximally admissible;
- complete iff X is admissible and X contains all arguments x such that X attacks all attacks against x;
- grounded iff X is minimally complete;
- ideal iff X is admissible and it is contained in every preferred set of arguments.

The maximal ideal set is less sceptical than the grounded set… It does not always coincide with the intersection of all preferred sets: it can be a proper subset of that intersection.

(Lemma 2.1) Let X and Y be two ideal sets of arguments. Then ‘X U Y’ is ideal.

The following properties of the ideal semantics hold (Theorem 2.1):
(i) Every abstract argumentation framework admits a unique maximal ideal set of arguments.
(ii) The maximal ideal set of arguments is complete.
(iii) The maximal ideal set of arguments is a superset of the grounded set and is a subset of the intersection of all preferred sets.
(iv) If the intersection of all preferred sets is admissible, then it coincides with the maximal ideal set.
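These definitions are easy to check by brute force on small frameworks. The sketch below (my own invented four-argument framework, not an example from the paper) enumerates subsets to find the admissible, preferred and maximal ideal sets, and illustrates how the maximal ideal set can be a proper subset of the intersection of the preferred sets:

```python
from itertools import combinations

# Invented example: a and b attack each other, both attack c, c attacks d.
args = {"a", "b", "c", "d"}
attacks = {("a", "b"), ("b", "a"), ("a", "c"), ("b", "c"), ("c", "d")}

def subsets(s):
    s = sorted(s)
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

def atk(x, y):
    return any((u, v) in attacks for u in x for v in y)

# Definition 2.2: admissible, preferred and ideal sets of arguments.
adm = [x for x in subsets(args)
       if not atk(x, x) and all(atk(x, {b}) for b in args if atk({b}, x))]
preferred = [x for x in adm if not any(x < y for y in adm)]
ideal = [x for x in adm if all(x <= p for p in preferred)]
max_ideal = max(ideal, key=len)

common = frozenset.intersection(*preferred)
# Here the preferred sets are {a, d} and {b, d}, so common = {d}; but {d}
# is not admissible, and the maximal ideal set is empty: a proper subset.
```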

… Concretely, assumption-based argumentation frameworks are concrete instances of abstract argumentation frameworks where arguments in Arg are defined as deductions from assumptions in an underlying logic, viewed as a deductive system, and where attack is defined in terms of a notion of contrary.

2.3, A deductive system is a pair (L, R) where
- L is a formal language consisting of countably many sentences, and
- R is a countable set of inference rules of the form alpha1, …, alphan / alpha.
alpha (a member of L) is called the conclusion of the inference rule, alpha1, …, alphan (also members of L) are called the premises of the inference rule, and n>=0.

If n=0, then the inference rule represents an axiom. A deductive system does not distinguish between domain-independent axioms/rules, which belong to the specification of the logic, and domain-dependent axioms/rules which represent a background theory.

2.4, Given a selection function f, a (backward) deduction of a conclusion alpha based on (or supported by) a set of premises P is a sequence of multi-sets S1, …, Sm, where S1 = {alpha}, Sm = P, and… Each Si is a step in the deduction.

Deductions are the basis for the construction of arguments in assumption-based argumentation frameworks, but to obtain an argument from a backward deduction we restrict the premises to ones that are acceptable as assumptions. Moreover, to specify when one argument attacks another, we need to determine when a sentence is the contrary of an assumption…

2.5, An assumption-based argumentation framework is a tuple (L, R, A, ~) where
- (L, R) is a deductive system.
- A (a subset of L, and not equal to {}) is the set of candidate assumptions.
- If alpha is in the set A, then there is no inference rule of the form ‘alpha <- alpha1, …, alphan’ in R.
- ~ is a (total) mapping from A into L. ~alpha is the contrary of alpha.

2.6, An argument ‘A |- alpha’ is a deduction of alpha whose premises A are all assumptions (of the set of candidate assumptions).

Given an argument ‘a’ of the form ‘A |- alpha’ we say that a is based upon A.

The notation ‘A |- alpha’ encapsulates the essence of an argument, by focusing attention on its set of assumptions A and its conclusion alpha…

2.7 … the only way to attack an argument is to attack one of its assumptions…

… the abstract framework AF corresponding to an assumption-based argumentation framework ABF is constructed as follows…

… “rebuttal” attacks can be reduced to the “undermining” attacks of assumption-based argumentation frameworks.

2.8, Refined definitions where sets of arguments are replaced by sets of assumptions underlying arguments:
- A set of assumptions A attacks a set of assumptions B iff there exists an argument A’ |- alpha such that A’ is a subset of A and ~alpha is in B.
- A set of assumptions A is admissible iff A does not attack itself and A attacks every set of assumptions B that attacks A.
- A set of assumptions A is preferred iff it is maximally admissible.
- A set of assumptions A is complete iff it is admissible and contains all assumptions x such that A attacks all attacks against {x}.
- A set of assumptions A is grounded iff it is minimally complete.
- A set of assumptions A is ideal iff A is admissible and it is contained in every preferred set of assumptions.

2.9, Let (L, R, A, ~) be an assumption-based argumentation framework and let alpha be in L. Then alpha is an admissible/grounded/ideal belief iff there exists an argument A |- alpha such that A is a subset of an admissible/grounded/ideal set of assumptions.
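To see how the refined definitions work concretely, here is a minimal sketch (my own toy rules and contraries, not an example from the paper) that forward-chains rules from sets of assumptions and tests attack and admissibility via contraries, in the spirit of Definitions 2.8 and 2.9:

```python
from itertools import combinations

# Invented deductive system: q and r are facts, p <- q,a and not_p <- r,b,
# with assumptions a, b and contraries ~a = not_p, ~b = p.
rules = [("p", ["q", "a"]), ("not_p", ["r", "b"]), ("q", []), ("r", [])]
assumptions = {"a", "b"}
contrary = {"a": "not_p", "b": "p"}

def closure(asms):
    """All sentences derivable from a set of assumptions (deductions read
    forwards rather than backwards, for simplicity)."""
    derived = set(asms)
    changed = True
    while changed:
        changed = False
        for head, body in rules:
            if head not in derived and all(s in derived for s in body):
                derived.add(head)
                changed = True
    return derived

def attacks(x, y):
    # A set x attacks a set y iff the contrary of some assumption in y is
    # the conclusion of an argument supported by (a subset of) x.
    return any(contrary[b] in closure(x) for b in y)

subs = [frozenset(c) for r in range(len(assumptions) + 1)
        for c in combinations(sorted(assumptions), r)]
adm = [x for x in subs
       if not attacks(x, x) and all(attacks(x, y) for y in subs if attacks(y, x))]
# {a} and {b} attack each other, so the admissible sets are {}, {a}, {b}
```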

(Theorem 2.2) Let ABF be an assumption-based argumentation framework, and AF the abstract argumentation framework equivalent to it.
(i) If a set of assumptions A is admissible/grounded/ideal wrt ABF, then the union of all arguments supported by any subset of A is admissible/grounded/ideal wrt AF.
(ii) The union of all sets of assumptions supporting the arguments in an admissible/grounded/ideal set of arguments wrt AF is admissible/grounded/ideal wrt ABF.

3, Computing ideal arguments for abstract argumentation

Ideal sets of arguments can be computed incrementally, in defence of a given, desired argument, by means of admissible dispute trees.

In general, dispute trees can be seen as a way of generating a winning strategy for a proponent to win a dispute against an opponent. The proponent starts by putting forward an initial argument, and then the proponent and the opponent alternate in attacking each other’s previously presented arguments…

3.1, A dispute tree for an initial argument a is a (possibly) infinite tree T such that…

The set of all arguments belonging to the proponent nodes in T is called the defence set of T.

3.2, A dispute tree T is admissible iff no argument labels both a proponent and an opponent node.

(Theorem 3.1) Any dispute tree that has no infinitely long branches is an admissible dispute tree.

(Theorem 3.2) (i) If T is an admissible dispute tree for an argument a then the defence set of T is admissible. (ii) If a is an argument and a is in A where A is an admissible set of arguments then there exists an admissible dispute tree for a with defence set A’ such that A’ is a subset of A and A’ is admissible.

(Theorem 3.3) An admissible set of arguments S is ideal iff for each argument a attacking S there exists no admissible set of arguments containing a.

3.3, An abstract admissible dispute tree T is ideal iff for no opponent node O in T there exists an admissible tree with root O.

(Theorem 3.4) (i) If T is an ideal dispute tree for an argument a then the defence set of T is ideal. (ii) If ‘a’ is an argument and ‘a’ is in A where A is an ideal set of arguments then there exists an ideal dispute tree for ‘a’ with defence set A’ and A’ is a subset of A.

Friday, 23 March 2007

12, The Evolution of Cooperation

Notes taken from ‘The Evolution of Co-operation’, by Robert Axelrod (1984)

The Prisoner’s Dilemma – a two-player game in which each player chooses, at each move, whether to cooperate or defect, with individual payoffs depending on what both players (independently) choose, as follows:

(R’s move, C’s move, R’s payoff, C’s payoff)
Cooperate, Cooperate, R=3, R=3 (reward for mutual cooperation)
Cooperate, Defect, S=0, T=5 (sucker’s payoff and temptation to defect)
Defect, Cooperate, T=5, S=0 (temptation to defect and sucker’s payoff)
Defect, Defect, P=1, P=1 (punishment for mutual defection)

Strategy (or decision rule): A specification of what to do in any situation that might arise.

TIT FOR TAT, the strategy of starting with co-operation, and thereafter doing what the other player did on the previous move.

w is the ‘weight’ (or importance) of the next move relative to the current move. It is a ‘discount parameter’ that represents the degree to which the payoff of each move is discounted relative to the previous move.

1 + w + w^2 + w^3 + … The sum of this infinite series, for any w greater than zero and less than one, is simply 1/(1-w).
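A quick numeric sanity check of the closed form:

```python
# Partial sums of 1 + w + w^2 + ... approach 1/(1-w) for 0 < w < 1.
w = 0.9
partial = sum(w ** k for k in range(1000))
print(abs(partial - 1 / (1 - w)) < 1e-9)  # True
```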

(Proposition 1) If the discount parameter, w, is sufficiently high, there is no best strategy independent of the strategy used by the other player.

A strategy is ‘collectively stable’ if no strategy can invade it.

(Proposition 2) TIT FOR TAT is collectively stable iff w is large enough; the critical value of w is a function of the four payoff parameters: T, R, P and S.
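The critical value can be made concrete for the payoffs in the table above. The closed forms below are my own derivation (comparing TIT FOR TAT against permanent and alternating defection, using the 1/(1-w) sum), not quoted from the book:

```python
# Payoffs from the table above: T=5, R=3, P=1, S=0.
T, R, P, S = 5, 3, 1, 0

def tft_vs_tft(w):         # mutual cooperation forever: R + wR + w^2 R + ...
    return R / (1 - w)

def alld_vs_tft(w):        # defect always: T once, then T + wP + w^2 P + ...
    return T + w * P / (1 - w)

def alternator_vs_tft(w):  # D,C,D,C,...: T + wS + w^2 T + w^3 S + ...
    return (T + w * S) / (1 - w * w)

# TIT FOR TAT resists both deviations iff w >= max((T-R)/(T-P), (T-R)/(R-S)),
# which equals 2/3 for these payoffs.
critical_w = max((T - R) / (T - P), (T - R) / (R - S))

def tft_resists(w):
    return tft_vs_tft(w) >= max(alld_vs_tft(w), alternator_vs_tft(w))

# tft_resists(0.9) is True, tft_resists(0.3) is False
```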

(Proposition 3) Any strategy which may be the first to cooperate can be collectively stable only when w is sufficiently large.

A ‘nice’ strategy is one, such as TIT FOR TAT, which will never be the first to defect.

(Proposition 4) For a ‘nice’ strategy to be collectively stable, it must be ‘provoked’ by the first defection of the other player.

(Proposition 5) ALL D (i.e. always defect) is always collectively stable.

A strategy is ‘maximally discriminating’ if it will eventually cooperate even if the other has never cooperated yet, and, once it cooperates, will never cooperate again with ALL D but will always cooperate with another player using the same strategy as it uses.

p is the proportion of interactions by someone using the new strategy with another individual using the new strategy.

(Proposition 6) The strategies which can invade ALL D in a cluster with the smallest value of p are those which are maximally discriminating, such as TIT FOR TAT.

(Proposition 7) If a nice strategy cannot be invaded by a single individual, it cannot be invaded by any cluster of individuals.

How to do well in a durable iterated Prisoner’s Dilemma:
1. Don’t be envious.
2. Don’t be the first to defect.
3. Reciprocate both cooperation and defection. (Extracting more than one defection for each defection of the other risks escalation. On the other hand, extracting less than one-for-one risks exploitation.)
4. Don’t be too clever.

How to promote cooperation:
1. Enlarge the shadow of the future.
2. Change the payoffs.
3. Teach people to care about each other.
4. Teach reciprocity.
5. Improve recognition abilities.

Four factors which can give rise to interesting types of social structure:
- Labels, a fixed characteristic of a player, such as sex or skin colour, which can be observed by the other player.
- Reputation, malleable and comes into being when another player has information about the strategy that the first one has employed with other players.
- Regulation, a relationship between a government and the governed… Gives rise to the problems of just how stringent the rules and enforcement procedures should be.
- Territoriality, occurs when players interact with their neighbours rather than with just anyone.

A new strategy is introduced into one of the neighbourhoods of a population where everyone else is using a native strategy. The new strategy territorially invades the neighbourhood if every location in the territory will eventually convert to the new strategy.

A native strategy is territorially stable if no strategy can territorially invade it.

(Proposition 8) If a rule is collectively stable, it is territorially stable.

TIT FOR TAT’s robust success is due to being nice, provocable, forgiving and clear:
- Niceness – it is never the first to defect, preventing it from getting into unnecessary trouble.
- Provocability – retaliation discourages the other side from persisting whenever defection is tried.
- Forgiveness – helps restore mutual cooperation.
- Clarity – makes its behavioural pattern easy to recognise, and once recognised, it is easy to perceive that the best way of dealing with TIT FOR TAT is to cooperate with it.

Negation in Computing Argumentation

I was just looking through your 1999 paper 'Computing Argumentation in Logic Programming' and noticed that there was no mention of classical negation (like '¬p') anywhere. Is there a reason for this? Would propositions of the form '¬p' and hypotheses of the form 'not ¬p' require alterations to the paper and its findings?

In our approach classical negation '¬p' is meant to be represented by means of additional atoms like not_p. This means that in some cases, if the programs are not well written, then an inconsistency (p and not_p) may hold in some extension. This can be avoided if assumptions are properly positioned, e.g.
'p if q' and 'not p if r'
really are written as
'p if q, a' and 'not_p if r, b'
where a, b are assumptions and contrary(a)=not_p and contrary(b)=p.

So, basically, we can ignore classical negation, and the overall approach is still fine.

Just to check a few things:

- In the paper, 'not p' is negation as failure, right? Is it to be interpreted as "p cannot be shown to hold and so we assume it does not hold"?

Correct.

- Are literals of the form 'not p' allowed as the conclusions of rules?

No, conclusions 'not p', where 'not' is negation as failure, are not allowed in rules.

- Strictly speaking, would a program of the form
"p if q, a
not_p if r, b
where a,b are assumptions and contrary(a)=not_p and contrary(b)=p"
be allowed given that assumptions must be of the form of negative literals (like 'not p')?


Assumptions are negation-as-failure literals in this case; sorry, I have written them as 'a' and 'b'. You see, we have proven that naf literals are really assumptions in the AIJ97 paper, so I often simply write them as assumptions. In terms of naf, my example could become what you suggest below, or alternatively
p if q, not not_p
not_p if r, not p
where not in the premises of the rules is negation as failure.

I suggest you read my paper Abstract Argumentation (AI and Law 1996 or 1997): you can find it on my web page.

So, instead, would we have to say something like:
"p if q, not a
not_p if r, not b
where not a, not b are assumptions and contrary(a)=not_p and contrary(b)=p"?
Is this a correct translation?


Yes, this would be fine too.

Thursday, 22 March 2007

10.2, Argumentation, N-Person Games and Stable Marriage Problem

Notes taken from ‘On the acceptability of arguments and its fundamental role in nonmonotonic reasoning, logic programming and n-person games’, by Phan Minh Dung (1995)

2.1, Argumentation in N-Persons Games

Stable extensions do not capture the intuitive semantics of every meaningful argumentation system… As preferred extensions exist for every argumentation framework, we can introduce the preferred solutions to n-person games by defining them as the preferred extensions of the corresponding argumentation system… The new solutions satisfy both conditions of a rational standard behaviour: freeness from inner contradiction and the ability to withstand any attack from outside. This is clearly a contribution to the theory of n-person games.

2.2, Argumentation and the Stable Marriage Problem

Let P be a knowledge base represented either as a logic program, or as a non-monotonic theory or as an argumentation framework. Then there is not necessarily a “bug” in P if P has no stable semantics (as demonstrated by the stable marriage love triangle example).

11.1-11.2, Computing Argumentation in Logic Programming

Notes taken from ‘Computing Argumentation in Logic Programming’, by Antonio C. Kakas and Francesca Toni (1999)

1, Introduction

In the argumentation-based approach to the semantics of LP, a logic program P is seen as a theory in a background monotonic logic (the logic of Horn clauses) that can be extended by suitable subsets, delta, of a given set of 'hypotheses' H consisting of all negative (variable-free) literals of the form 'not p' that can be constructed in the language of P. A set of hypotheses delta can be seen as an 'argument' supporting the consequences of the extended theory 'P U delta' derived in the underlying monotonic logic, where the (negative) hypotheses in delta are understood as (ground) atomic facts. In general, not all subsets of H are suitable to extend P. For example, a subset delta may be 'self-conflicting', in the sense that 'P U delta' entails an atom p, with 'not p' in delta. More interestingly, two different sets of hypotheses (or arguments) delta and delta' can be 'conflicting' with each other with only one of them forming an 'allowed extension' of P.

The various (logic programming) LP semantics can be understood via different criteria for selecting appropriate subsets of (hypotheses) H to provide allowed extensions of a logic program (P). These different criteria can all be defined in terms of a common notion of ‘attack’ between subsets of H... The different selection criteria are then based upon comparing candidate sets of hypotheses with attacking sets of hypotheses...

2, Argumentation semantics for logic programming

2.1, A set of hypotheses A attacks another set delta (on a hypothesis not p) iff ‘P U A’ |= p for some not p in delta.

2.3, A set of hypotheses delta is admissible iff for all sets of hypotheses A, if A attacks delta, then delta attacks ‘A – delta’.

Intuitively, delta is admissible iff delta defends itself against each attack A. Note that a set of hypotheses that attacks itself cannot be admissible: taking A = delta, delta would have to attack ‘delta – delta’ = {}, and there is no attack against the empty set.
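Definitions 2.1 and 2.3 can be tried out on a tiny program. The sketch below uses the classic two-rule program 'p if not q' and 'q if not p' (my example, not one from the paper), treating the hypotheses in a set as facts when computing the consequences of 'P U delta':

```python
from itertools import combinations

# P: p <- not q, q <- not p; H: all naf literals of the language.
program = [("p", ["not q"]), ("q", ["not p"])]
hyps = {"not p", "not q"}

def consequences(delta):
    """Consequences of P U delta, with hypotheses treated as atomic facts."""
    facts = set(delta)
    changed = True
    while changed:
        changed = False
        for head, body in program:
            if head not in facts and all(b in facts for b in body):
                facts.add(head)
                changed = True
    return facts

def attacks(a, delta):
    # Definition 2.1: A attacks delta iff P U A |= p for some 'not p' in delta
    return any(h[len("not "):] in consequences(a) for h in delta)

subs = [frozenset(c) for r in range(len(hyps) + 1)
        for c in combinations(sorted(hyps), r)]
# Definition 2.3: delta is admissible iff every attacker A is counter-attacked
# on A - delta (a self-attacking delta fails, since {} cannot be attacked).
adm = [d for d in subs
       if all(attacks(d, a - d) for a in subs if attacks(a, d))]
# admissible: {}, {'not p'}, {'not q'} -- matching the two stable models
```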

… every admissible set of hypotheses is ‘contained’ in some preferred extension, and thus, in order to determine whether a given query holds with respect to the preferred extension and partial stable model semantics, it is sufficient to determine whether the query holds with respect to the semantics of admissible sets.

… a program may admit several admissible (and preferred) sets of hypotheses which may be incompatible with each other. Any proof theory for the admissibility/preferred extension semantics would therefore need to allow a non-deterministic choice among such sets.

… the construction of an admissible set delta can be done incrementally, starting from a given set of hypotheses delta0, by adding to delta0 suitable defences for it. The existence of several admissible (and preferred) sets of hypotheses reflects itself on the existence of several suitable defences for a given delta0 and imposes a non-deterministic choice among defences in the proof theory.

2.8, A set of hypotheses delta is weakly stable iff for all sets of hypotheses A, if A attacks delta, then ‘delta U A’ attacks ‘A – delta’.

Intuitively, the attack A against delta can be used in conjunction with delta to defend delta against A. Note that, similarly to the case of admissible sets, a set of hypotheses that attacks itself cannot be weakly stable.

Trivially, every admissible set of hypotheses is weakly stable, but not vice versa…

As for admissibility and preferred extension semantics, it can be proved that every weakly stable set of hypotheses is ‘contained’ in some stable theory…

2.9, A set of hypotheses delta is acceptable to another set of hypotheses delta0 iff for all sets of hypotheses A, if A attacks ‘delta – delta0’, then there exists a set of hypotheses D such that D attacks ‘A – (delta0 U delta)’ and D is acceptable to ‘A U delta0 U delta’.
A set of hypotheses is acceptable iff it is acceptable to the empty set of hypotheses.

The base case for the above recursive definition is given by
(BC1) a set of hypotheses delta is acceptable to another set delta0 if there exists no set of hypotheses A attacking (delta – delta0).
… (BC2) delta is acceptable to delta0 if delta is a subset of delta0.
Note that a set of hypotheses delta that attacks itself cannot be acceptable, because there is no attack against the empty set.

Weakly stable and admissible sets of hypotheses are acceptable.

2.13, A set of hypotheses delta is wf-acceptable iff for all sets of hypotheses A, if A attacks delta, then there exists a set of hypotheses D such that D attacks A and D is wf-acceptable.

The base case for wf-acceptability is given by:
(BC) delta is wf-acceptable if no set of hypotheses attacks delta.

(Lemma 2.15) A set of hypotheses delta is wf-acceptable iff each set of hypotheses delta’ (that is a subset of delta) is wf-acceptable.

Complete sets of hypotheses: Sets of hypotheses that are admissible and contain every hypothesis they ‘defend’.

2.16, A set of hypotheses delta is stable iff delta does not attack itself and delta attacks each hypothesis not in delta.

10.1, Acceptability of Arguments

Notes taken from ‘On the acceptability of arguments and its fundamental role in nonmonotonic reasoning, logic programming and n-person games’, by Phan Minh Dung (1995)

1.1, Argumentation Frameworks

1, An argumentation framework is a pair AF = (AR, attacks) where AR is a set of arguments, and attacks is a binary relation on AR.

2, A set S of arguments is said to be conflict-free if there are no arguments A, B in S such that A attacks B.

3, (1) An argument A is acceptable wrt a set S of arguments iff for each argument B: if B attacks A then B is attacked by S. (2) A conflict-free set of arguments S is admissible iff each argument in S is acceptable wrt S.

The (credulous) semantics of an argumentation framework is defined by the notion of preferred extension.

4, A preferred extension of an argumentation framework AF is a maximal (wrt set inclusion) admissible set of AF.

(Corollary 2) Every argumentation framework possesses at least one preferred extension. Hence, preferred extension semantics is always defined for any argumentation framework.

5, A conflict-free set of arguments S is called a stable extension iff S attacks each argument which does not belong to S.

(Lemma 3) S is a stable extension iff S = { A | A is not attacked by S}

(Lemma 4) Every stable extension is a preferred extension, but not vice versa.

1.2, Fixpoint Semantics and Grounded (Sceptical) Semantics

6, The characteristic function, denoted by F, of an argumentation framework AF is defined as follows: F(S) = {A | A is acceptable wrt S}

(Lemma 5) A conflict-free set S of arguments is admissible iff S is a subset of F(S).

It is easy to see that if an argument A is acceptable wrt S then A is also acceptable wrt any superset of S. Thus, it follows immediately that (Lemma 6) F is monotonic (wrt set inclusion).

The sceptical semantics of argumentation frameworks is defined by the notion of grounded extension:

7, The grounded extension of an argumentation framework AF, denoted by GE, is the least fixed-point of F.
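Since F is monotonic, the least fixed-point can be computed by simply iterating F from the empty set (this converges for any finite framework). A small sketch with an invented three-argument framework:

```python
# Definitions 6 and 7 directly: iterate the characteristic function F from
# the empty set.  Example framework (my own): c attacks b, b attacks a.
args = {"a", "b", "c"}
attacks = {("c", "b"), ("b", "a")}

def F(s):
    # F(S) = the arguments whose every attacker is attacked by S
    return {a for a in args
            if all(any((d, b) in attacks for d in s)
                   for b in args if (b, a) in attacks)}

ge = set()
while F(ge) != ge:
    ge = F(ge)
# ge == {"a", "c"}: c is unattacked, and c defends a against b
```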

8, An admissible set S of arguments is called a complete extension iff each argument which is acceptable wrt S, belongs to S.

Intuitively, the notion of complete extensions captures the kind of confident rational agent who believes in everything he can defend.

(Lemma 7) A conflict-free set of arguments E is a complete extension iff E = F(E).

The relations between preferred extensions, grounded extensions and complete extensions are as follows (Theorem 2):
(1) Each preferred extension is a complete extension, but not vice-versa.
(2) The grounded extension is the least (wrt set inclusion) complete extension.
(3) …

In general, the intersection of all preferred extensions does not coincide with the grounded extension.

9, An argumentation framework AF = (AR, attacks) is finitary iff for each argument A, there are only finitely many arguments in AR which attack A.

1.3, Sufficient Conditions for Coincidence between Different Semantics

10, An argumentation framework is well-founded iff there exists no infinite sequence A0, A1, …, An, … such that for each i, Ai+1 attacks Ai. (Note: in particular, this rules out cycles.)

(Theorem 3) Every well-founded argumentation framework has exactly one complete extension which is grounded, preferred and stable.

11, (1) An argumentation framework AF is said to be coherent if each preferred extension of AF is stable. (2) We say that an argumentation framework AF is relatively grounded if its grounded extension coincides with the intersection of all preferred extensions.

It follows directly from the definition that there exists at least one stable extension in a coherent argumentation framework.

An argument B indirectly attacks A if there exists a finite sequence A0, …, A2n+1 such that (1) A = A0 and B = A2n+1, and (2) for each i, 0<=i<=2n, Ai+1 attacks Ai.

An argument B indirectly defends A if there exists a finite sequence A0, …, A2n such that (1) A = A0 and B = A2n, and (2) for each i, 0<=i<=2n-1, Ai+1 attacks Ai.

An argument B is said to be controversial wrt A if B indirectly attacks A and indirectly defends A.

An argument is controversial if it is controversial wrt some argument A.

12, (1) An argumentation framework is uncontroversial if none of its arguments is controversial. (2) An argumentation framework is limited controversial if there exists no infinite sequence of arguments A0, …, An, … such that Ai+1 is controversial wrt Ai.

(Theorem 4) (1) Every limited controversial argumentation framework is coherent. (2) Every uncontroversial argumentation framework is coherent and relatively grounded.

An argument A is said to be a threat to a set of arguments S if A attacks S and A is not attacked by S. A set of arguments D is called a defence of a set of arguments S if D attacks each threat to S.

(Lemma 9) Let AF be a limited controversial argumentation framework. Then there exists at least one nonempty complete extension E of AF.

(Lemma 10) Let AF be an uncontroversial argumentation framework, and A be an argument such that A is not attacked by the grounded extension GE of AF and A is not in GE. Then (1) There exists a complete extension E1 such that A is in E1, and (2) There exists a complete extension E2 such that E2 attacks A.

(Corollary 11) Every limited controversial argumentation framework possesses at least one stable extension.

Well-founded frameworks and complete extensions

Dung defines the well-founded frameworks in his Acceptability of Arguments paper as: "Definition 10. An argumentation framework is well-founded iff there exists no infinite sequence A0, A1, ..., An, ... such that for each i, Ai+1 attacks Ai."

So, does this framework satisfy the definition (where 'C -> B' means argument C attacks argument B):
C -> B
B -> A
D -> A
A -> E


Yes, the framework is well-founded.

If so, what would be its complete extension? I think it would be {C, D}. However, according to Theorem 3 "Every well-founded argumentation framework has exactly one complete extension which is grounded, preferred and stable". This extension {C, D} is grounded and preferred but not stable. Can you see where I've gone wrong in my interpretation?

The (grounded, preferred, stable) extension is {C,D,E}. Indeed, E is defended by D, so it has to be in the grounded extension.

If an argument F were added to the framework such that F attacks C (F -> C), then the (grounded, preferred, stable) extension would be {F, B, D, E}.

If, on the other hand, argument E was removed (and hence the attack 'A -> E') from the original framework above, then the (grounded, preferred, stable) extension would be {C, D}.

Lastly, if F and G were added to the original framework above such that ‘F -> C’ and ‘G -> D’, then the (grounded, preferred, stable) extension would be {F, B, G, E} (E is again defended, this time by B, which attacks A).
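Extensions like those above can be double-checked mechanically: these frameworks are well-founded, so iterating Dung's characteristic function from the empty set yields the unique (grounded, preferred, stable) extension. A sketch for the first two cases:

```python
def grounded(args, attacks):
    """Least fixed-point of the characteristic function F (Definition 7)."""
    def F(s):
        return {a for a in args
                if all(any((d, b) in attacks for d in s)
                       for b in args if (b, a) in attacks)}
    s = set()
    while F(s) != s:
        s = F(s)
    return s

base = {("C", "B"), ("B", "A"), ("D", "A"), ("A", "E")}
ext = grounded({"A", "B", "C", "D", "E"}, base)
# ext == {"C", "D", "E"}: C and D are unattacked, and D defends E against A

with_f = base | {("F", "C")}
ext_f = grounded({"A", "B", "C", "D", "E", "F"}, with_f)
# ext_f == {"B", "D", "E", "F"}: F defends B, while D still excludes A
```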

Wednesday, 21 March 2007

Preference-based argumentation

There are many takes on preference-based argumentation with different ways of representing preferences, for example:
- Assigning values to arguments and allowing audiences to vary in their ranking of the values;
- Allowing attacks on attacks, e.g. "C attacks 'A attacks B'";
- Rule-based, e.g. "B > A if happy" and "A > B if sad";
- Fact-based, e.g. "B > A".

These preference-based frameworks differentiate between attack and defeat. For example, in the value-based approach “… if an argument attacks an argument whose value is preferred it can be accepted, and yet not defeat the argument it attacks… An argument A defeats_a an argument B for audience a iff both attacks(A, B) and not valpref_a(val(B), val(A)).”

There are other definitions of defeat. For example:
- (A defeats B) if (A attacks B, A is preferred to B and B is not preferred to A).
- (A defeats B) if (A attacks B and B does not have higher value than A).
- (A defeats B) if (A attacks B and there is no attack on A's attack).
Some element of non-determinism still seems to remain: for those attacks where the condition of defeat is not met, what can be said of them?

Whether the original problem of possible non-determinism between two arguments can be completely eradicated is an interesting question.

Logics for Representing Knowledge

What different logics are there for representing knowledge? As an example, if told that "every bird flies, but penguins don't", some possible logics for representing this are:

Default Logic
b(X) : f(X) / f(X)
p(X) -> b(X)
p(X) : / ~f(X)

Logic Programming (with negation as failure)
f(X) <- b(X) & not abnormal(f(X))
abnormal(f(X)) <- p(X)
~f(X) <- p(X)

Autoepistemic Logic (where 'L' means 'believes')
b(X) & ~L(~f(X)) -> f(X)
p(X) -> ~f(X)
a -> La (given)
~a -> L~a (given)
L(a -> b) -> (La -> Lb) (given)

"Tricky" Argumentation Frameworks

Amongst the set of "tricky" frameworks are those with loops.

Odd-loop frameworks pose problems as follows:

- 1-loop, argument X is attacked by argument A which attacks itself (X <- A -> A). Does A's attack on X have any meaning, given that it attacks itself?

- 3-loop, X <- A -> B -> C -> A. What does A's attack on X mean, given that the argument (C) that defends it from B also attacks it?

- 5-loop, A -> B -> C -> D -> E -> A. Which argument(s) do we believe, if any?

Even-loop frameworks pose problems as follows:

- 2-loop, A -> B -> A. Can we believe one of A or B, or not?

- 4-loop, A -> B -> C -> D -> A. Which argument(s) do we believe, if any?
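The differing behaviour of even and odd loops shows up directly if the preferred extensions are computed by brute force (a sketch, enumerating all subsets of arguments):

```python
from itertools import combinations

# Brute-force preferred extensions for the 2-loop and 3-loop above.
def preferred(args, attacks):
    def atk(x, y):
        return any((u, v) in attacks for u in x for v in y)
    subs = [frozenset(c) for r in range(len(args) + 1)
            for c in combinations(sorted(args), r)]
    adm = [x for x in subs
           if not atk(x, x) and all(atk(x, {b}) for b in args if atk({b}, x))]
    return [x for x in adm if not any(x < y for y in adm)]

two = preferred({"A", "B"}, {("A", "B"), ("B", "A")})
three = preferred({"A", "B", "C"}, {("A", "B"), ("B", "C"), ("C", "A")})
# the 2-loop gives two preferred extensions, {A} and {B} (a credulous
# choice); the 3-loop gives only the empty set (nothing is defensible)
```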

Tuesday, 20 March 2007

9.7-9.9, Persuasion in Practical Argument Using Value-based Argumentation Frameworks

Notes taken from ‘Persuasion in Practical Argument Using Value-based Argumentation Frameworks’ by Trevor J. M. Bench-Capon (2003)

7, An example moral debate…

8, Facts in moral debate

In the discussion thus far we have assumed that all arguments relate to some value. But sometimes we need to consider matters of fact as well as opinion grounded in values.

My solution is to treat fact as if it were a value, but fact is always the value with the highest preference for all parties. Whether we prefer life to property (as in the example) is a matter of choice, but to deny facts is to depart from rational argument by resorting to wishful thinking… We will continue to refer to arguments in preferred extensions for all reasonable value orders (those which rate fact as the highest value) as objectively acceptable.

Introducing facts can bring with it uncertainty. For example, we may not know whether Carla has sufficient insulin. Thus argument (G) may be attacked by another factual argument (H) to the effect that Carla does not have ample insulin. Note (H) is itself attacked by (G)… This introduces a cycle which is single valued in that both arguments relate to fact… This means that we may get multiple preferred extensions, even if we have an ordering on values…

Now we can see that there are four possibilities for the status of an argument. Arguments may be objectively acceptable sceptically, if they appear in every preferred extension. They may be objectively acceptable credulously, if they appear in every preferred extension corresponding to some choice of facts… They may be subjectively acceptable sceptically if they appear in every preferred extension relating to some value order… Finally they may be subjectively acceptable credulously if they appear in some preferred extension.

For persuasion against this background of uncertainty, only arguments which are objectively acceptable sceptically can be made persuasive for a determined challenger. Otherwise some choice of facts and value preferences will allow him to resist the defence. While the challenger may resist persuasion by a choice of which of two uncertain alternatives to believe, the defender cannot make such a choice and hope to be persuasive. It is for this reason that the choice must be made in the problem description when setting up the… scenario… Alternatively it is necessary to resolve the factual disputes before attempting to persuade someone to accept the value-based arguments.

9, Summary…

9.5-9.6, Persuasion in Practical Argument Using Value-based Argumentation Frameworks

Notes taken from ‘Persuasion in Practical Argument Using Value-based Argumentation Frameworks’ by Trevor J. M. Bench-Capon (2003)

5, Value-based argumentation framework

5.1, A value-based argumentation framework (VAF) is a 5-tuple: VAF = (AR, attacks, V, val, P) where AR is a finite set of arguments, attacks is an irreflexive binary relation on AR, V is a non-empty set of values, val is a function which maps from elements of AR to elements of V and P is a set of possible audiences…

Audiences are individuated by their preferences between values… We can therefore see the elements of P as names for the possible orderings on V.

5.2, An audience-specific value-based argumentation framework (AVAF) is a 5-tuple VAFa = (AR, attacks, V, val, Valprefa) where AR, attacks, V and val are as for VAF, a is an audience, and Valprefa is a preference relation (transitive, irreflexive and asymmetric)…

… (we) distinguish between one argument attacking another, and that attack succeeding, so that the attacked argument is defeated. We therefore define the notion of defeat for an audience:

5.3, An argument A defeatsa an argument B for audience a iff both attacks(A, B) and not Valprefa(val(B), val(A)).

… In practice we expect the number of values to be small relative to the number of arguments. Many disputes can be naturally modelled using only two values…

5.4-5.7, Modified definitions of acceptable arguments (acceptablea), conflict-free sets of arguments, admissible sets of arguments and preferred extensions (preferreda) for a particular audience in light of the definition of defeatsa.

Now for a given choice of value preferences Valprefa we are able to construct an AF equivalent to the AVAF, by removing from attacks those attacks which fail because they face a superior value…
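Definition 5.3 and this reduction are easy to prototype. The sketch below is my own (the function names and the `prefers` encoding of an audience are not from the paper); it assumes the pairs passed to `defeats` are already in the attacks relation:

```python
# Sketch of an audience-specific VAF (names are mine, not the paper's).
# prefers(v1, v2) holds iff the audience ranks value v1 strictly above v2.

def defeats(attacker, target, val, prefers):
    """For a pair already in attacks: the attack succeeds unless the
    audience prefers the target's value to the attacker's (def. 5.3)."""
    return not prefers(val[target], val[attacker])

def reduce_to_af(attacks, val, prefers):
    """Drop the attacks that fail against a superior value, leaving a
    standard argumentation framework for this audience."""
    return [(a, b) for (a, b) in attacks if defeats(a, b, val, prefers)]

# Two-value example: in a mutual attack, only the attack coming from the
# audience's preferred value survives.
val = {"A": "life", "B": "property"}
attacks = [("A", "B"), ("B", "A")]
prefers = lambda v1, v2: (v1, v2) == ("life", "property")
print(reduce_to_af(attacks, val, prefers))  # [('A', 'B')]
```

With the opposite audience (property over life) the surviving attack is reversed, which is exactly how a single value preference settles a whole cycle.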

6, Acceptance in value-based argument frameworks

We term arguments which are acceptable irrespective of the choice of value preferences, i.e. acceptable to every audience, objectively acceptable. Arguments which are acceptable to some audiences are termed subjectively acceptable. Note also that sceptical acceptance in the framework considered as an AF is not only not sufficient for objective acceptance, but is also not necessary.

Note that objective acceptance of an attacked argument requires that the number of values be smaller than the number of arguments: otherwise it is always possible to prefer the value of the attacker, and that value to any of its attackers. A VAF is most useful when the number of values is small, since a single choice of preference between values is then able to determine whether a number of attacks succeed or fail.

6.1, Objective acceptance. Given a VAF, (AR, attacks, V, val, P), an argument A (of AR) is objectively acceptable iff, for every p in P, A is in preferredp.

6.2, Subjective acceptance. Given a VAF, (AR, attacks, V, val, P), an argument A (of AR) is subjectively acceptable iff, for some p in P, A is in preferredp.

An argument which is neither objectively nor subjectively acceptable (such as one attacked by an objectively acceptable argument with the same value) is said to be indefensible.
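These definitions can be checked by brute force, assuming (as section 5 suggests) that audiences correspond to the strict total orders on V. This sketch is my own and only suits very small frameworks:

```python
# Brute-force classification per definitions 6.1/6.2 (my own sketch):
# one audience per strict total order on V, preferred extensions computed
# naively over all subsets of arguments.
from itertools import combinations, permutations

def preferred_extensions(args, attacks):
    """All maximal admissible sets of a finite Dung-style AF."""
    atk = set(attacks)
    def conflict_free(s):
        return not any((a, b) in atk for a in s for b in s)
    def acceptable(a, s):
        # every attacker of a is counter-attacked from s
        return all(any((c, b) in atk for c in s)
                   for (b, t) in atk if t == a)
    admissible = [set(c) for n in range(len(args) + 1)
                  for c in combinations(args, n)
                  if conflict_free(c) and all(acceptable(a, set(c)) for a in c)]
    return [s for s in admissible if not any(s < t for t in admissible)]

def classify(arg, args, attacks, values, val):
    """Return 'objective', 'subjective' or 'indefensible'."""
    in_every, in_some = True, False
    for order in permutations(values):              # one audience per order
        rank = {v: i for i, v in enumerate(order)}  # lower rank = preferred
        # an attack (x, y) fails when y's value is preferred to x's
        af = [(x, y) for (x, y) in attacks if not rank[val[y]] < rank[val[x]]]
        for ext in preferred_extensions(args, af):
            if arg in ext:
                in_some = True
            else:
                in_every = False
    return "objective" if in_every else ("subjective" if in_some else "indefensible")

# Two-cycle over two values: each argument is accepted exactly by the
# audience preferring its value, so both are subjectively acceptable.
args, attacks = ["A", "B"], [("A", "B"), ("B", "A")]
val = {"A": "life", "B": "property"}
print(classify("A", args, attacks, ["life", "property"], val))  # subjective
```

An unattacked argument comes out objective for every audience, and an argument attacked by an objectively acceptable argument of the same value comes out indefensible, matching the note above.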

… While there is a natural requirement for even cycles in a standard AF (e.g. a two-cycle is an obvious way to deal with uncertain and incomplete information), and Dung argues strongly that an interpretation of an AF with an odd cycle is plausible, we believe that odd cycles should be avoided in VAFs. An odd cycle means that nothing can be believed: it is akin to a paradox, and paradoxes are best avoided. Even cycles represent dilemmas, and require that a choice between alternatives be made. While such dilemmas have their place in cases of uncertainty, we believe that they should be resolved before practical arguments giving rise to them are advanced. The presence of a single-valued cycle in a VAF is a sure indication that the reasoning which gives rise to it is flawed.

6.3, An argument chain C in a VAF is a set of n arguments {a1, …, an} such that (i) all arguments have the same value; (ii) a1 has no attacker in C; (iii) for all ai in C, if i > 1, then ai is attacked and the sole attacker of ai is ai-1.

… if a1 is accepted, then every odd argument of C is also accepted and every even argument of C is defeated…

6.4, Every AVAF with no single-valued cycles has a unique, nonempty preferred extension.

6.5, A line of argument l for an argument arg comprises a set of n argument chains {C1, …, Cn}, each relating to a distinct value, such that arg is the last argument of C1, for each i > 1 the last argument of Ci attacks the first argument of Ci-1, and the first argument of Cn has no attacker with a value not already present in l.

6.6, For a VAF containing no cycles relating to a single value, in which every argument has at most one attacker, the status of any argument is determined by the parity of the chains in its line of argument.
(a) An argument is objectively acceptable iff it is odd numbered in C1 and the line of argument contains no subsequent odd chain.
(b) An argument is indefensible iff it is even numbered in C1 and the line of argument does not contain a subsequent odd chain.
(c) An argument is subjectively acceptable iff the line of argument contains an odd chain, Cn, where n > 1.
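The parity rule of 6.6 can be sketched as a tiny function. The encoding of a line of argument as its list of chain lengths is my own:

```python
# Sketch of the parity rule of 6.6 (my own encoding): a line of argument is
# summarised as the chain lengths [len(C1), ..., len(Cn)], together with the
# argument's 1-based position within C1.
def parity_status(position, chain_lengths):
    subsequent_odd = any(n % 2 == 1 for n in chain_lengths[1:])
    if subsequent_odd:
        return "subjective"      # (c): some odd chain after C1
    if position % 2 == 1:
        return "objective"       # (a): odd numbered in C1, no later odd chain
    return "indefensible"        # (b): even numbered in C1, no later odd chain

print(parity_status(1, [3]))     # objective
print(parity_status(2, [3, 2]))  # indefensible
print(parity_status(1, [2, 3]))  # subjective
```

This also illustrates the point of 6.5's observation: only the parities matter, not the arguments themselves, once values and attacks are arranged into chains.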

6.7, The preferred extension of a cycle with only two values comprises:
(i) the odd numbered arguments of all chains preceded by an even chain;
(ii) the odd numbered arguments of chains with the preferred value;
(iii) the even numbered arguments of all other chains.
Note that those included under (i) are objectively acceptable and those included under (ii) and (iii) are subjectively acceptable. The even numbered arguments of a chain preceded by an even chain are indefensible.

9.1-9.4, Persuasion in Practical Argument Using Value-based Argumentation Frameworks

Notes taken from ‘Persuasion in Practical Argument Using Value-based Argumentation Frameworks’ by Trevor J. M. Bench-Capon (2003)

“… In (many) cases(s) the dispute can only be resolved through choosing a preference…”

1, Introduction

assent through persuasion rather than intellectual coercion… When a case is brought to court, it is because the two parties disagree about what should be done in the light of some set of particular circumstances… Their arguments may all be sound. But their arguments will not have equal value for the judge charged with deciding the case… A key element in persuasion is identifying the value conflict at the root of the disagreement… Perelman makes much of the fact that an argument is addressed to an audience: in many cases this will be a particular audience with a particular set of values, and a particular ranking of them…

2, Standard argumentation frameworks

2.1-2.5, Definitions of an argumentation framework (AF), acceptable arguments, conflict-free sets of arguments, admissible sets of arguments and preferred extensions are as in Dung’s argumentation framework.

The notion of a preferred extension is interesting because it represents a consistent position within AF, which can defend itself against all attacks and which cannot be further extended without introducing a conflict. We can now view a credulous reasoner as one who accepts an argument if it is in at least one preferred extension, and a sceptical reasoner as one who accepts an argument only if it is in all preferred extensions… every AF has a preferred extension (possibly the empty set), and it is not generally true that an AF has a unique preferred extension. In the special case where there is a unique preferred extension we say that the dispute is resoluble, since there is only one set of arguments capable of rational acceptance.

2.6, If AF = (AR, attacks) where AR is finite and attacks contains no self-attacks, has two (or more) preferred extensions, then the directed graph of AF contains a simple directed cycle of even length.

3, Persuasion in a standard argumentation framework

… From figure 1 we can see that there are two preferred extensions: {P, Q, R, P -> Q, Q -> R} and {~P, Q, R, ~P -> Q, Q -> R}.

We can therefore see that Q and R are sceptically acceptable and P and ~P are credulously acceptable. This means that we should be able to persuade someone to accept Q (and also R). Suppose I assert Q: my opponent may challenge this with ~Q. I attack this with P -> Q. He in turn attacks this with ~P. I concede not P, and attack ~Q with ~P -> Q. Now my opponent cannot attack this with P, since P is attacked by the already asserted ~P. Therefore my opponent should be persuaded of the truth of Q.

What of a credulously acceptable argument, such as P? Here I cannot persuade my opponent because he can counter with ~P, and I have no independent way of arguing against ~P. So here I cannot persuade my opponent that P should be accepted, but neither can I be persuaded that it should be abandoned. There is no rational way of choosing between P and ~P; it is an empirical fact which must be determined by observation.
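Taking the two preferred extensions above as given, the sceptical/credulous split can be checked directly:

```python
# The two preferred extensions of figure 1, copied from the notes above.
exts = [{"P", "Q", "R", "P->Q", "Q->R"},
        {"~P", "Q", "R", "~P->Q", "Q->R"}]

sceptical = set.intersection(*exts)       # accepted by a sceptical reasoner
credulous = set.union(*exts) - sceptical  # accepted only credulously

print(sorted(sceptical))  # ['Q', 'Q->R', 'R']
print(sorted(credulous))  # ['P', 'P->Q', '~P', '~P->Q']
```

The intersection gives exactly the arguments a determined opponent can be persuaded of, while P and ~P (and the implications relying on them) remain a matter of choice between the two extensions.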

4, Practical reasoning

In practical reasoning an argument often has the following form:

Action A should be performed in circumstances C, because the performance of A in C would promote some good G.

This kind of argument can be attacked in a number of ways:
- It may be that circumstances C do not obtain; or it may be that performing A in C would not promote good G. These are similar to the way in which a factual argument can be attacked in virtue of the falsity of a premise, or because the conclusion does not follow from the premise.
- Alternatively it can be attacked because performing some action B, which would exclude A, would also promote G in C. This is like an attack using an argument with a contradictory conclusion.
- However, a practical argument like the one above can be attacked in two additional ways: It may be that G is not accepted as a good worthy of promotion, or that performing action B, which would exclude performing A, would promote a good H in C, and good H is considered more desirable than G. The first of these new attacks concerns the ends to be considered, and the second the relative weight to be given to the ends…

… if an argument attacks an argument whose value is preferred it can be accepted, and yet not defeat the argument it attacks. Thus we can, for arguments which derive their force from the promotion of a value, distinguish between attack and defeat (a successful attack)…

Thursday, 8 March 2007

Language Literals

What exactly is a “literal”? When we speak of a “literal”, do we speak of it as a language construct irrespective of its truth value? For example, is "~a" a literal as well as "a"? If not, would the term "proposition" be more apt when referring to the truth value of "literals"?

It is either an atom or the negation of an atom. An atom is the "smallest" formula in a logic-based language, namely something that can be either true or false. An atom can be equated to a proposition in propositional logic (logic without any variables and quantifiers). Most of the work in the [argumentation] literature is about propositional logic indeed, although some (including assumption-based argumentation) is not.

Tuesday, 6 March 2007

Rebuttal and Undercutting Attacks

For the descriptions below assume we have a system of only two agents, x1 and x2, with knowledge bases
KB(x1) = a <- b, a <- c, c
KB(x2) = ~a, ~b
where the set of candidate assumptions is {b}. The two agents are engaged in a dialectical argumentation process wherein x1 is attempting to defend an argument for the proposition 'a' and x2 is attempting to attack it.

Undercutting Attack
Assume agent x1 presents the argument
({a <- b, b}, a)
where 'b' is an assumption made by x1. Agent x2 can and will successfully undercut/attack this argument by putting forward the assumption-free argument
({~b}, ~b).

Rebuttal Attack
Assume agent x1 presents the assumption-free argument
({a <- c, c}, a).
Agent x2 can rebut/attack this argument by putting forward its own assumption-free argument
({~a}, ~a).
Note that x1 could potentially counter-attack this attack by putting forward the original argument again, unless there is a restriction specified by the argumentation protocol. Likewise, x2 could attack the counter-attack with the same argument again, and so on indefinitely.

Reducing a rebuttal to an undercutting attack
Suppose we redefine the knowledge bases of agents x1 and x2 as follows:
KB(x1) = a <- b, a <- c ^ alpha, c
KB(x2) = ~a <- beta, ~b <- gamma
where the set of candidate assumptions is {b, alpha, beta, gamma}, the contrary of b is ~b, the contrary of alpha is ~a, the contrary of beta is a, and the contrary of gamma is b.

Now suppose agent x1 presents the argument
({a <- c ^ alpha, c, alpha}, a)
Agent x2 can undercut/attack this argument by putting forward the argument
({~a <- beta, beta}, ~a).
Note that the culprit here is not the literal 'a' but the assumption 'alpha'.
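A minimal sketch of this reduced setting (the encoding of an argument as a pair of its assumptions and its claim is mine) shows that the rebuttal has become a pair of mutual undercuts:

```python
# Contraries of the candidate assumptions, copied from the notes above.
contrary = {"b": "~b", "alpha": "~a", "beta": "a", "gamma": "b"}

def undercuts(attacker, target):
    """attacker undercuts target iff attacker's claim is the contrary of
    one of target's assumptions (that assumption is the culprit)."""
    _, claim = attacker
    assumptions, _ = target
    return any(contrary[asm] == claim for asm in assumptions)

x1_arg = ({"alpha"}, "a")    # from ({a <- c ^ alpha, c, alpha}, a)
x2_arg = ({"beta"}, "~a")    # from ({~a <- beta, beta}, ~a)

print(undercuts(x2_arg, x1_arg))  # True: ~a is the contrary of alpha
print(undercuts(x1_arg, x2_arg))  # True: a is the contrary of beta
```

Unlike the rebuttal version, neither attack here is on a literal directly; each attack names an assumption as culprit, which is what makes the dialectical bookkeeping of assumption-based argumentation work.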

Monday, 5 March 2007

Dialogue Proof Procedures; The Semantic Web

Some thoughts following on from 05 March's supervisor meeting.

Dialectical Proof Procedures For Assumption-Based, Admissible Argumentation

“The set of assumptions A is a subset of the set of all literals that do not occur as the conclusion of any inference rule in R.” Isn’t this a bit silly at times? After all, if there is no knowledge of a literal, it should always be possible to make an assumption about its truth value.

There may be certain literals for which we do not want to make an assumption.

“(5) There is no infinite sequence of consecutive nodes all of which are proponent nodes.” Why is this important and why not for the opponent as well? How can the procedure tell that a branch will be infinite?

We are looking at things from the proponent’s perspective and thus the opponent can have sloppy arguments in places. This is not the case for the proponent.

“(Fig. 3) Concrete dispute tree for p for the assumption-based framework with p <- p.” Why is this (infinite) concrete dispute tree admissible? “… whether the selection function is patient or not, a concrete dispute tree must expand all of the proponent’s potential arguments into complete arguments…” Why isn’t there such a condition for the opponent nodes?

Similar to the response above, the opponent argument can be sloppy in places whereas the proponent argument (and defences) must be defensible in all branches of the tree.

“… by separating the search space from the search strategy, our approach might help to identify different ways in which a proponent might be able to win an argument, while minimising the conflict (culprit set) with the opponent.” What is a “search space” and a “search strategy”?

The way I understand it is, the search space is the collection of all the choice points whereas the search strategy is a way of determining which choice point to unfold next.

The Semantic Web
Do some reading but don’t delve into it too much. There are issues here of concern to the argumentation community, the use of ontologies for example. ‘A Semantic Web Primer’ is a good starting point.