Saturday 31 March 2007

15.4, A Generative Inquiry Dialogue System

Notes taken from ‘A Generative Inquiry Dialogue System’ (2007), by Elizabeth Black and Anthony Hunter

4, Dialogue System

The communicative acts in a dialogue are called moves. We assume that there are always exactly two agents (participants) taking part in a dialogue… Each participant takes its turn to make a move to the other participant…

A move in our system is of the form (Agent, Act, Content)… e.g. (x, ‘open’, belief), (x, ‘assert’, (support, conclusion)), (x, ‘close’, belief)…
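The (Agent, Act, Content) move format might be sketched as follows; the names `Move`, and the string encodings of acts and beliefs, are illustrative assumptions rather than anything from the paper:

```python
from typing import NamedTuple

# Hypothetical sketch of the move format (Agent, Act, Content).
class Move(NamedTuple):
    agent: str       # identifier of the agent making the move
    act: str         # 'open', 'assert', or 'close'
    content: object  # a belief, or a (support, conclusion) pair for asserts

open_move = Move('x1', 'open', 'b')
assert_move = Move('x2', 'assert', (frozenset({'a', 'a -> b'}), 'b'))
close_move = Move('x1', 'close', 'b')
```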

4.1, A dialogue… is a sequence of moves of the form… (The first move of a dialogue… must always be an open move…, every move of the dialogue must be made to a participant of the dialogue…, and the agents take it in turns to receive moves…)

4.2, (Terminology that allows us to talk about the relationship between two dialogues: ‘Sub-dialogue of’…, ‘top-level dialogue’…, ‘top-dialogue of’…, and ‘extends’…)

4.3, Let D be a dialogue with participants x1 and x2 such that topic(D) = gamma. We say that m(s)… is a matched-close for D iff m(s-1) = (P, close, gamma) and m(s) = (~P, close, gamma), where P is one of the participants and ~P denotes the other.

So a matched-close will terminate a dialogue D, but only if D has not already terminated and any sub-dialogues embedded within D have already terminated.
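A minimal sketch of the matched-close test, assuming moves are plain (agent, act, content) tuples; the function name is illustrative:

```python
# A matched-close occurs when the last two moves are both close moves on
# the dialogue's topic, made by the two distinct participants in turn.
def is_matched_close(moves, topic, participants):
    if len(moves) < 2:
        return False
    m_prev, m_last = moves[-2], moves[-1]
    return (m_prev[1] == 'close' and m_prev[2] == topic and
            m_last[1] == 'close' and m_last[2] == topic and
            {m_prev[0], m_last[0]} == set(participants))

moves = [('x1', 'open', 'b'), ('x1', 'close', 'b'), ('x2', 'close', 'b')]
print(is_matched_close(moves, 'b', ('x1', 'x2')))  # True
```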

4.4, Let D be a dialogue. D terminates at t iff the following conditions hold…

4.5, The current dialogue is… (the innermost dialogue that has not yet terminated).

We adopt the standard approach of associating a commitment store with each agent participating in a dialogue. A commitment store is a set of beliefs that the agent has asserted so far in the course of the dialogue. As commitment stores consist of things that the agent has already publicly declared, its contents are visible to the other agent participating in the dialogue. For this reason, when constructing an argument, an agent may make use of not only its own beliefs but also those from the other agent’s commitment store.

4.6, A commitment store associated with an agent… is a set of beliefs.

An agent’s commitment store grows monotonically over time. If an agent makes a move asserting an argument, every element of the support is added to the agent’s commitment store. This is the only time the commitment store is updated.
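The update rule above can be sketched directly: only assert moves change a store, and they add every element of the support. The function name and tuple encoding are assumptions for illustration:

```python
# Sketch of the commitment-store update: an assert move adds every
# element of its support to the asserting agent's store; open and close
# moves leave it unchanged, so the store grows monotonically.
def update_commitment_store(store, move):
    agent, act, content = move
    if act == 'assert':
        support, _conclusion = content
        return store | set(support)
    return store

cs = set()
cs = update_commitment_store(cs, ('x1', 'open', 'b'))
cs = update_commitment_store(cs, ('x1', 'assert', ({'a', 'a -> b'}, 'b')))
print(sorted(cs))  # ['a', 'a -> b']
```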

4.7, Commitment store update

… when an argument inquiry dialogue with topic ‘alpha1 ^ … ^ alphaN -> phi’ is opened, a question store associated with that dialogue is created whose contents are {alpha1, …, alphaN}. Throughout the dialogue the participant agents will both try to provide arguments for the literals in the question store. This may lead them to open further nested argument inquiry dialogues, each with a topic that is a rule whose consequent is a literal in the question store.

4.8… a question store… is a finite set of literals such that…
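The creation of a question store from a rule topic might be sketched like this, assuming (for illustration only) that a rule ‘alpha1 ^ … ^ alphaN -> phi’ is encoded as a pair of its antecedents and its consequent:

```python
# When an argument inquiry dialogue is opened with a rule as topic, its
# question store holds the rule's antecedent literals {alpha1, ..., alphaN}.
def make_question_store(rule_topic):
    antecedents, _consequent = rule_topic
    return frozenset(antecedents)

# the rule 'a ^ c -> b' yields the question store {a, c}
qs = make_question_store((('a', 'c'), 'b'))
print(sorted(qs))  # ['a', 'c']
```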

A protocol is a function that returns the set of moves that are legal for an agent to make at a particular point in a particular type of dialogue…

4.9, The argument inquiry protocol is a function (that takes the top-level dialogue that the agents are participating in and the identifier of the agent whose turn it is to move, and returns the set of legal moves that the agent can make)…

(An agent can only assert an argument or open a rule for a conclusion that is in the current question store, and such that the move has not been made by either agent at an earlier timepoint in the dialogue.)

Note that it is straightforward to check conformance with the protocol as the protocol only refers to public elements of the dialogue.
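The shape of the protocol function can be sketched as below. This is a deliberate simplification of Definition 4.9: real assert moves would carry a (support, conclusion) pair, and the function here only illustrates the two constraints the notes mention (conclusions must be in the question store; no repeated moves; a close on the current topic is always legal):

```python
# Hedged sketch of a protocol: from the public dialogue state alone,
# return the set of moves the agent may legally make next.
def legal_moves(dialogue, agent, question_store, current_topic):
    earlier = set(dialogue)  # moves already made by either agent
    candidates = set()
    for literal in question_store:
        candidates.add((agent, 'assert', literal))
        candidates.add((agent, 'open', literal))
    moves = {m for m in candidates if m not in earlier}
    # a close move on the current topic is always available
    moves.add((agent, 'close', current_topic))
    return moves

ms = legal_moves([('x1', 'open', 'r')], 'x2', {'a'}, 'r')
```

Because the function reads only the dialogue and the question store, never an agent's private beliefs, conformance can be checked by any observer, which is the point made above.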

… a specific strategy function… allows an agent to select exactly one legal move to make at each timepoint in an argument inquiry dialogue. A strategy is personal to an agent, and the move that it returns depends on the agent’s private beliefs. The argument inquiry strategy states that if there are any legal moves that assert an argument that the agent can construct (from its own belief base together with the other agent’s commitment store), then a single one of those moves is selected…; else if there are any legal open moves whose content is a defeasible rule in the agent’s beliefs, then a single one of these moves is selected… If there are no such moves then a close move is made.

4.10… The function pickO returns the chosen open move.

4.11… The function pickA returns the chosen assert move.

4.12, The argument inquiry strategy is a function (that takes the top-level dialogue that the agents are participating in and the identifier of the agent whose turn it is to move, and returns exactly one of the legal moves)…
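The preference ordering of the strategy (assert if possible, else open, else close) can be sketched as follows; the `sorted(...)[0]` selections stand in for the paper's pickA and pickO choice functions, and the belief check is a simplification of constructing an argument from beliefs plus the other agent's commitment store:

```python
# Sketch of the argument inquiry strategy: prefer a legal assert the
# agent can back with its beliefs, else a legal open whose rule is in
# its beliefs, else close. Exactly one move is returned.
def strategy(legal, beliefs):
    asserts = sorted(m for m in legal if m[1] == 'assert' and m[2] in beliefs)
    if asserts:
        return asserts[0]   # stands in for pickA
    opens = sorted(m for m in legal if m[1] == 'open' and m[2] in beliefs)
    if opens:
        return opens[0]     # stands in for pickO
    return next(m for m in legal if m[1] == 'close')

legal = {('x1', 'open', 'a -> b'), ('x1', 'close', 'b')}
print(strategy(legal, {'a -> b'}))  # ('x1', 'open', 'a -> b')
```

Note the asymmetry this captures: the protocol is public and checkable, while the strategy depends on private beliefs and so is personal to each agent.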

Note that a top-level argument inquiry dialogue will always have a defeasible fact as its topic, but each of its sub-dialogues will always have a defeasible rule as its topic.

4.13, D is a well-formed argument inquiry dialogue iff… (D does not continue after it has terminated and is generated by the argument inquiry strategy).

… Note that we assume some higher-level planning component that guides the agent when deciding whether to enter into a dialogue, who this dialogue should be with and on what topic, i.e. that makes the decision to make the move m1.
