Wednesday 28 February 2007

Inquiry Dialogue System; Dialectical Proof Procedures

Thoughts following on from 27 February’s supervisor meeting.

‘A Generative Inquiry Dialogue System’ (2007 – Elizabeth Black, Anthony Hunter)

“… A commitment store is a set of beliefs that the agent has asserted so far in the course of the dialogue…” Can commitments/assertions be retracted if not relevant to the final argument construction? Can something like provisional commitments be included?

This isn’t really necessary. Once a commitment has been made public it does not make sense to retract it unless it is proven to be wrong, which is a different subject altogether.

“An agent’s commitment store grows monotonically over time. If an agent makes a move asserting an argument, every element of the support is added to the agent’s commitment store. This is the only time the commitment store is updated…” Isn’t there a risk here of irrelevance and non-minimality of the final main argument?

Yes, because a chosen path may prove to be wrong and assertions may have been made along that path. However, not much can be done about this.
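
For my own reference, a minimal sketch of the update rule being quoted, assuming arguments are (support, claim) pairs; the representation and names here are mine, not Black and Hunter’s:

```python
# Minimal sketch of the commitment store update quoted above; the
# representation (arguments as support/claim pairs) and names are mine.
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Argument:
    support: frozenset  # the formulas the argument rests on
    claim: str          # the conclusion being asserted

@dataclass
class Agent:
    name: str
    commitments: set = field(default_factory=set)

    def assert_argument(self, arg: Argument) -> None:
        # the only update: every element of the support is added;
        # nothing is ever removed, so the store grows monotonically
        self.commitments |= arg.support

agent = Agent("x1")
agent.assert_argument(Argument(frozenset({"a", "a -> b"}), "b"))
print(agent.commitments)  # {'a', 'a -> b'} (in some order)
```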

“In order to select a single one of the legal assert or open moves, we assign a unique number to the move content and carry out a comparison of these numbers.” Do these numbers or comparisons have any meaning in terms of betterness and priority? In other words, does it mean anything for one move to be chosen or preferred over another?

No, this is not the point, as explained below.

“Let us assume that B is composed of a finite number Z of atoms. Let us also assume that there is a registration function ‘mu’ over these atoms. So, for a literal L, mu(L) returns a unique single digit number base Z… For a rule ‘L1 ^ … ^ Ln -> Ln+1’, mu(L1 ^ … ^ Ln -> Ln+1) is an n+1 digit number of the form mu(L1)…mu(Ln)mu(Ln+1). This gives a unique base Z number for each formula in B and allows us to choose a unique open move (i.e. the move whose mu value is least).” What are these functions all about? Where did they come from? On what basis are numbers assigned to atoms, given that the lower value (of an atom, rule or argument) is the one chosen by the selection function? Do these assigned numbers (and comparisons) have any meaning in terms of preference and priority, or do they have an element of randomness about them?

As far as I understand, this feature is there to guarantee determinism in the various selections (Pick) that they make. It is a bit long-winded and unnecessary; any other deterministic mechanism would work just as well, such as simply selecting the first item in a list. They need determinism because it is one of the requirements of their strategy (and I also like this for dialogue constraints; see the ATAL 2001 paper).
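
To make the mechanism concrete for myself, a rough sketch of how I read the mu encoding; the atom ordering is invented, and the point is only that the comparison is deterministic, not that it is meaningful:

```python
# Rough sketch of how I read the mu registration function: each atom gets
# a unique base-Z digit, a rule's digits are simply concatenated, and Pick
# takes the formula with the least mu value. The atom ordering is invented;
# the encoding carries no preference, it only makes the choice deterministic.
atoms = ["a", "b", "c"]                     # suppose B has Z = 3 atoms
Z = len(atoms)
digit = {atom: i for i, atom in enumerate(atoms)}  # mu on literals

def mu(formula):
    """formula is an atom, or (premises, conclusion) for a rule."""
    if isinstance(formula, str):
        return digit[formula]
    premises, conclusion = formula          # L1 ^ ... ^ Ln -> Ln+1
    value = 0
    for lit in list(premises) + [conclusion]:
        value = value * Z + digit[lit]      # append one base-Z digit
    return value

def pick(candidates):
    # the unique legal move with the least mu value
    return min(candidates, key=mu)

print(pick([(("a",), "c"), (("b",), "a")])) # (('a',), 'c'): mu = 0*3 + 2 = 2
```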

“Note that a top-level argument inquiry dialogue will always have a defeasible fact as its topic, but each of its sub-dialogues will always have a defeasible rule as its topic.” Sub-dialogues may only be opened for rules (that are present in the agent’s beliefs); is this constraint necessary?

Perhaps not, especially where privacy and minimising communication are concerned. Rather than opening for a rule like ‘a -> b’ where ‘a’ needs to be established, in my opinion the agent would do better to simply initiate a fresh inquiry for ‘a’. That way it would not declare its intent to establish the rule ‘a -> b’ (if that is a problem).

Another point to note is that, according to the paper, opening for a rule (like ‘a -> b’) does not add the rule to the agent’s commitment store. However, this seems absurd, since an agent would not open for a rule if it did not believe it to be true. Thus, in my opinion, opening for a rule should add that rule to the agent’s commitment store.

“… Note that we assume some higher-level planning component that guides the agent when deciding whether to enter into a dialogue, who this dialogue should be with and on what topic, i.e. that makes the decision to make the move m1.” But isn’t this the job of the agent strategy?

No. The agent strategy takes over once these decisions have been made.

“We define the outcome of an argument inquiry dialogue as the set of all arguments that can be constructed from the union of the commitment stores and whose claims are in the question store.” This is still a lot of post-dialogue work, as not all commitments will be relevant and some may even be contradictory.

Yes, but this is unavoidable.
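
A sketch of my reading of this outcome definition, pooling the stores and forward-chaining; for brevity it returns the derivable claims rather than the full arguments, and the representation (facts as strings, rules as (premises, head) pairs) is mine, not the paper’s:

```python
# Sketch of the quoted outcome definition: pool the commitment stores and
# keep what can be derived whose claim is in the question store. For
# brevity this returns claims rather than full arguments.
def outcome(stores, question_store):
    pooled = set().union(*stores)
    facts = {f for f in pooled if isinstance(f, str)}
    rules = pooled - facts
    changed = True
    while changed:                        # naive forward chaining
        changed = False
        for premises, head in rules:
            if set(premises) <= facts and head not in facts:
                facts.add(head)
                changed = True
    return facts & question_store         # claims answerable from the union

cs1 = {"a", (("a",), "b")}                # agent 1's commitment store
cs2 = {(("b",), "c")}                     # agent 2's commitment store
print(outcome([cs1, cs2], {"c"}))         # {'c'}
```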

‘Dialectical Proof Procedures For Assumption-Based, Admissible Argumentation’ (2005 – P. M. Dung, R. A. Kowalski, F. Toni)

This all seems to be going on inside the head of a single agent (with a single knowledge-base)!? What if the knowledge-bases are different?

Work on this is lacking, and this is one area where my PhD will step in.

“(3ii) If ‘delta’ is not an assumption, then there exists some inference rule S/’delta’ (that is a member of R) and there exists exactly one child of N, which…” There may be many such rules, so which one does it pick and why? Why only one when that selection can potentially lead to failure?

At the implementation and search-space level this could be achieved by allowing backtracking, as in Prolog.
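
A toy illustration of what I mean, with an invented rule base; the procedure commits to one rule and, Prolog-style, retries the next alternative if that branch fails:

```python
# Toy Prolog-style backtracking over an invented rule base: heads map to
# alternative bodies. Committing to one rule is safe because a failed
# branch just sends the search back to the next alternative. (No loop
# check, so a rule like p <- p would make this run forever.)
rules = {
    "p": [["q"], ["r"]],   # two alternative rules concluding p
                           # (no rule concludes q, so that branch fails)
    "r": [[]],             # r holds unconditionally (empty body)
}

def prove(goal):
    for body in rules.get(goal, []):      # try each rule S/goal in turn
        if all(prove(sub) for sub in body):
            return True                   # this choice succeeded
        # otherwise fall through: backtrack to the next alternative
    return False                          # every alternative failed

print(prove("p"))  # True: the q branch fails, backtracking finds r
```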

“… either (4ia) ‘sigma’ is ignored and there exists exactly one child of N, which…or (4ib) ‘sigma’ is a culprit, and there exists exactly one child of N, which…” Ignored? Culprit? Why this sentence? On what basis? How can it know before trying? What if it’s wrong and it can’t counter-attack this sentence?

Again, as in the previous response, it does not know until it tries. The selection would be a result of some decision-making in the search space of possible moves.

“(5) There is no infinite sequence of consecutive nodes all of which are proponent nodes.” Why is this important and why not for the opponent as well? How can the procedure tell that a branch will be infinite?

Don’t know. This will be looked into for the next meeting.

“… Empty multi-sets O, however, can occur in failed parts of the search space, either because…” What does it mean for a part of the search to be failed? Does that mean the proponent’s overall counter-attack effort is unsuccessful, or that the proponent made an amendable wrong move somewhere along the way?

Note here that the failure is in the search space and not in the tree itself, which allows for backtracking. Only if all possible routes in the search space fail will the proponent be unable to successfully counter-attack the opponent.

“… in addition to nodes that simply add concrete inference steps, a concrete dispute tree may contain extra branches that represent potential arguments by the opponent that fail…” How about potential arguments by the proponent that fail (for the same reasons)?

This occurs for the opponent because all seemingly viable inference rules are selected (step 4ii). This would not occur for the proponent because “some inference rule” is selected (i.e. only one), and this selection is linked to the previous comments about ignoring assumptions, backtracking in the search space etc.
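
The asymmetry can be pictured as an and/or search over an invented attack relation: the proponent’s choices are existential (some counter-attack suffices, hence backtracking), while the opponent’s are universal (every attack must be answered). This is just the shape, not the paper’s definitions:

```python
# And/or search over an invented attack relation, illustrating the
# asymmetry between the two sides of the dispute.
attacks = {"a": ["b"], "b": ["c"], "c": []}   # x -> arguments attacking x

def defended(arg):
    # opponent's side: every attacker must itself be defeated
    return all(defeated(att) for att in attacks[arg])

def defeated(arg):
    # proponent's side: some counter-attacker must be defended
    return any(defended(att) for att in attacks[arg])

print(defended("a"))  # True: b is defeated via c, which is unattacked
```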

“(Fig. 3) Concrete dispute tree for p for the assumption-based framework with p <- p.” Why is this (infinite) concrete dispute tree admissible? “… whether the selection function is patient or not, a concrete dispute tree must expand all of the proponent’s potential arguments into complete arguments…” Why isn’t there such a condition for the opponent nodes?

Don’t know. This will be looked into for the next meeting.

“Finding a culprit is dealt with in case 4(i) of Definition 6.1, by choosing whether to ignore or counter-attack the selected assumption in an opponent node.” Choosing? How? “Finding a way of attacking the culprit is dealt with by the search strategy. Our definition leaves the search strategy undetermined.” Search strategy? What’s that?

The former pair of questions has been answered above. As for the latter pair of questions, I would presume that a search strategy defines how to proceed through the search space.

I understand the difference between, and purpose of, abstract and concrete dispute trees (I think, anyway). But I don’t understand the purpose of dispute derivations, or the difference between them and dispute trees.

They do seem to be doing the same thing. However, in addition, dispute derivations maintain the set of assumptions and the set of culprits encountered so far. This allows for filtering of defences/culprits by defences/culprits, which has some interesting properties (sketched below).
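
My rough picture of that extra bookkeeping, with invented names; the filtering rules below are a paraphrase of my understanding, not the paper’s definitions:

```python
# Rough sketch of the bookkeeping I understand dispute derivations to add:
# a running defence set D and culprit set C, used to filter repeated work.
D = set()   # assumptions the proponent is defending so far
C = set()   # opponent assumptions already chosen as culprits

def propose_defence(assumption):
    if assumption in C:
        return "fail"    # clashes with a culprit: this branch fails
    if assumption in D:
        return "skip"    # already defended: filtered, no repeated work
    D.add(assumption)
    return "defend"

def choose_culprit(assumption):
    if assumption in D:
        return "fail"    # cannot attack one's own defence
    if assumption in C:
        return "skip"    # already counter-attacked: filtered
    C.add(assumption)
    return "attack"
```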

“(1i) If ‘sigma’ (that is a member of Pi) is selected then…” “(2) If S is selected in Oi and ‘sigma’ is selected in S then…” On what basis could/would a proposition be selected?

Any approach would do; for example, selecting the first proposition in a list.

“(1ii)… there exists some inference rule R/’sigma’…” There may be many such rules. Which one would be chosen? What if there is no such rule; is it then a non-admissible belief?

The former question has been answered above. As for the latter question, it would mean that the algorithm fails, implying either that an initial argument cannot be built to support the conclusion or that a defensive argument cannot be built to counter-attack an opponent’s attack.

Miscellaneous discussion

Look into the Abductive LogIc Agent System (ALIAS). In particular, ‘Cooperation and Competition in ALIAS: A Logic Framework for Agents that Negotiate’ (2003).

You defined completeness of a procedure with respect to a semantics as follows: “If there is some ok outcome with respect to the semantics then the procedure finds it”. Does this (completeness) mean that the procedure can find all such viable solutions, or that the procedure will find at least one such solution but may not necessarily be able to find all possible solutions?

Correctness/soundness of a procedure with respect to a semantics:
1) Of success: if the procedure succeeds then the outcome is ok (according to the semantics).
2) Of failure: if the procedure fails then no outcome is ok (according to the semantics).

Completeness of a procedure with respect to a semantics is linked to soundness of failure: for every possible outcome, if the outcome is ok with respect to the semantics then the procedure finds it. (The contrapositive of soundness of failure reads: if some ok outcome exists, then the procedure does not fail, i.e. it finds at least one such outcome.)
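
To pin down the two readings for myself, a toy formalisation over finite outcome sets; all names are invented for illustration, with ‘found’ standing for what the procedure returns and ‘ok_set’ for the outcomes the semantics sanctions:

```python
# Toy formalisation of the two readings of completeness over finite sets.
def finds_at_least_one(found, ok_set):
    # weaker reading (contrapositive of soundness of failure):
    # if any ok outcome exists, the procedure finds one of them
    return not ok_set or bool(found & ok_set)

def finds_all(found, ok_set):
    # stronger reading: every ok outcome is among those found
    return ok_set <= found

print(finds_at_least_one({"o1"}, {"o1", "o2"}))  # True
print(finds_all({"o1"}, {"o1", "o2"}))           # False
```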

Also, I remember seeing mention of "strong" completeness somewhere previously, though I can't remember where. Do you know of any such differentiation between "strong" and "weak" completeness?

It might have been related to selection functions: e.g. in strong completeness the selection function is chosen first, before applying the procedure; in weak completeness it is chosen afterwards, when applying the procedure. Have a look at John Lloyd’s book, Foundations of Logic Programming, for this.
