Thursday 8 February 2007

Argument-Based Decision Making using Defeasible Logic Programming

Thoughts following on from Carlos Iván Chesñevar’s talk at Imperial entitled ‘Argument-Based Decision Making using Defeasible Logic (DeL) Programming’.

In what follows, when referring to DeL programming, treat '<-' (or '->') as implication for a strict rule and '<' (or '>') as implication for a defeasible rule. Treat '/' as inference, '~' as negation (propositional or negation as failure), and '=' as the contrary relation (so 'alpha = ~q' reads: the contrary of alpha is ~q).

Another point of note: a deductive system is a pair (L, R), where L is the language (the set of allowed "expressions": a, b, a^b, a^(a^b), etc.) and R is the set of inference rules of the form (a / b), including conjunction introduction ( (a, b) / (a ^ b) ), modus ponens ( (a, (a -> b)) / b ), etc.
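As a toy illustration of this (my own sketch, not anything from the talk), a deductive system can be rendered as a set of rules applied over a growing set of derived expressions; the representation and names below are illustrative assumptions only.

    # A minimal sketch of a deductive system (L, R): L is left implicit
    # (strings standing for expressions such as 'a', 'a->b'), and R is a
    # list of rules, each a pair (premises, conclusion).

    def closure(facts, rules):
        """Forward-chain the rules over a set of facts until nothing new derives."""
        derived = set(facts)
        changed = True
        while changed:
            changed = False
            for premises, conclusion in rules:
                if premises <= derived and conclusion not in derived:
                    derived.add(conclusion)
                    changed = True
        return derived

    # One concrete instance of modus ponens, (a, (a -> b)) / b:
    rules = [(frozenset({"a", "a->b"}), "b")]
    print(closure({"a", "a->b"}, rules))  # {'a', 'a->b', 'b'}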

For defeasible rules (like ‘a < b, ~c’), is it correct that the conclusion of the rule is defeasible and not the rule itself? Is there any work on defeating the existence of the rules (strict or defeasible) themselves?

Yes and don’t know respectively.

As for the former question, suppose we have a DeL program consisting of the following rules:
(a < b, ~c), (b <-), (~c <-), (~a <-)
Then, although we can build an argument {b, ~c} for the conclusion (a), this is built from a defeasible rule and is thus “defeated” by the strict rule (~a <-).
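A toy rendering of this example (my own sketch, not the DeLP dialectical procedure): the defeasible argument for (a) is rejected because the strict rules already establish (~a).

    # Strict conclusions from (b <-), (~c <-), (~a <-), plus the one
    # defeasible rule (a < b, ~c). Names and encoding are illustrative only.
    strict_facts = {"b", "~c", "~a"}
    defeasible_rules = [({"b", "~c"}, "a")]

    def negate(literal):
        return literal[1:] if literal.startswith("~") else "~" + literal

    for body, head in defeasible_rules:
        if body <= strict_facts:                      # the argument {b, ~c} can be built
            status = "defeated" if negate(head) in strict_facts else "undefeated"
            print(f"argument {sorted(body)} for {head!r} is {status}")
    # -> argument ['b', '~c'] for 'a' is defeated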

As for the latter question, it doesn’t really make sense for defeasible rules, since such rules are defeasible by nature: a defeasible rule can be used as long as its conclusion does not contradict the conclusions of any strict rules.

Is this work of DeLP orthogonal to or compatible with Assumption-Based (AB) argumentation?

There is a “mapping” between the two, but there are also differences.

As for the mapping, take as an example a DeL program as follows:
(p > q), (> p), (r > ~q), (> r).
Using this program we can define arguments for (q) and (~q) as follows:
{(> p), (p > q)} and {(> r), (r > ~q)} respectively.

The above DeL program can be mapped to an AB deductive system consisting of the following inference rules:
( (p, alpha) / q ), ( beta / p ), ( (r, gamma) / ~q ), ( delta / r )
Where alpha, beta, gamma and delta are the possible assumptions.

Next, contrary relations have to be defined between the assumption and non-assumption predicates to complete the mapping between the DeL program and the AB system, as follows:
(alpha = ~q), (beta = ~p), (gamma = q), (delta = ~r).

Using this set of rules we can define arguments for (q) and (~q) as follows:
{alpha, beta} and {gamma, delta} respectively.
Note that the arguments “undercut” each other since the conclusion ~q attacks the support for the conclusion q (i.e. alpha) and the conclusion q attacks the support for the conclusion ~q (i.e. gamma).

As a further illustration of the mapping between DeL programs and AB deductive systems, if the defeasible rule (> p) in the DeL program was a strict rule (-> p) instead, all that would be required would be to replace the rule (beta / p) in the AB system with the rule ( / p).
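As a sketch of this mapping (the names alpha, beta, gamma and delta follow the example above; the backward-chaining code itself is my own illustrative assumption), the two arguments and the mutual attack can be reproduced as follows.

    # ABA-style rules: head -> (non-assumption premises, assumption premises).
    # If the defeasible rule (> p) were the strict rule (-> p), the entry for
    # "p" would simply become (set(), set()).
    aba_rules = {
        "q":  ({"p"}, {"alpha"}),
        "p":  (set(), {"beta"}),
        "~q": ({"r"}, {"gamma"}),
        "r":  (set(), {"delta"}),
    }
    contrary = {"alpha": "~q", "beta": "~p", "gamma": "q", "delta": "~r"}

    def support(goal):
        """Backward-chain to collect the assumptions supporting a goal."""
        premises, assumptions = aba_rules[goal]
        needed = set(assumptions)
        for p in premises:
            needed |= support(p)
        return needed

    arg_q, arg_not_q = support("q"), support("~q")
    print(sorted(arg_q), sorted(arg_not_q))   # ['alpha', 'beta'] ['delta', 'gamma']

    # Each argument attacks ("undercuts") the other: its conclusion is the
    # contrary of an assumption in the other argument's support.
    print("~q" in {contrary[a] for a in arg_q})      # True
    print("q" in {contrary[a] for a in arg_not_q})   # True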

One difference between the two is that DeL programming depends upon arguments being “minimal” whereas the work of AB argumentation does not. As an illustration of non-minimal arguments for each of the approaches, consider the example above:
- {(> p), (> r), (p > q)} is a non-minimal argument for (q) in the DeL program since (> r) serves no purpose in supporting the conclusion (q).
- {alpha, beta, gamma} is a non-minimal argument for (q) in the AB system since gamma serves no purpose in supporting the conclusion (q).
Requiring arguments to be minimal is not desirable at an implementation level, since checking whether an argument is minimal can be computationally expensive.
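As a rough illustration of why the minimality check is costly (my own sketch over the q/~q example above, not taken from any DeLP or ABA implementation), a naive test re-runs derivability on every proper subset of the support:

    from itertools import combinations

    # The q/~q example again, assumption-style: head -> (premises, assumptions).
    rules = {"q": ({"p"}, {"alpha"}), "p": (set(), {"beta"}),
             "~q": ({"r"}, {"gamma"}), "r": (set(), {"delta"})}

    def derives(assumptions, goal):
        premises, needed = rules[goal]
        return needed <= assumptions and all(derives(assumptions, p) for p in premises)

    def is_minimal(assumptions, goal):
        """Minimal: the goal derives, and no proper subset of the support still derives it."""
        return derives(assumptions, goal) and not any(
            derives(set(subset), goal)
            for size in range(len(assumptions))
            for subset in combinations(assumptions, size))

    print(is_minimal({"alpha", "beta"}, "q"))           # True
    print(is_minimal({"alpha", "beta", "gamma"}, "q"))  # False: gamma is redundant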

Another difference is in the semantics; for DeL programming there is none.

He said, “the preferences are among argument structures and not rules”. What is meant by argument structure here?

Not sure.

How do the preferences work?

The preferences are drawn from “specificity”. As an example, consider a knowledge base containing the following strict rule, defeasible rules and facts:
(a -> b), (a > k), (b > ~k), (a), (b).
Now, using this we can construct an argument for (k) using the rules (a > k) and (a), but we can also construct an argument for its contrary (~k) using the rules (b > ~k) and (b). So which do we “prefer”? In this particular example we prefer (k) because of the strict rule (a -> b), which implies that (a), the basis of the conclusion (k), is more specific than (b), the basis of the contrary conclusion (~k). This can be seen as ancestral in the sense that the “eldest” basis, the one from which the other can be derived, is best.
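A sketch of the specificity comparison in this example (an informal rendering of the idea, not DeLP's full definition of specificity): the basis {a} strictly entails the basis {b} via (a -> b), but not vice versa, so the argument for (k) is preferred.

    strict_rules = [({"a"}, "b")]   # the strict rule (a -> b)

    def strict_closure(facts):
        """Close a set of facts under the strict rules."""
        facts, changed = set(facts), True
        while changed:
            changed = False
            for body, head in strict_rules:
                if body <= facts and head not in facts:
                    facts.add(head)
                    changed = True
        return facts

    def more_specific(basis1, basis2):
        """basis1 is more specific if it strictly entails basis2 but not conversely."""
        return basis2 <= strict_closure(basis1) and not basis1 <= strict_closure(basis2)

    print(more_specific({"a"}, {"b"}))  # True:  prefer the argument for (k) built from (a)
    print(more_specific({"b"}, {"a"}))  # False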

In a “blocking situation”, where two arguments are equally preferred, how is the conflict resolved? Especially given that he said “the system is sceptical from the beginning”. What semantical notion is used in their framework?

Not sure and none respectively; the framework currently has no semantics.

He said that the “Defeasible Logic Programming can be considered a deviation from Logic Programming” and that the “defeasible rules behave like assumptions in some sense, and don’t in some sense”. How are assumptions represented in their framework? Are they?

Yes, in the sense that a defeasible rule can be “assumed” to hold in the absence of strict rules asserting the contrary.

Further Work
Read, understand and adapt the recent paper (2007) by Elizabeth Black and Anthony Hunter entitled ‘A Generative Inquiry Dialogue System’ to work for:
- Assumption-based argumentation instead of Defeasible Logic Programming.
- Dialogue constraints instead of a strategy function.
However, the same protocol should be used, and the resulting work should be able to prove the same results.
