Wednesday 25 April 2007

Practical Reasoning

Taken from Chapter 4 of 'Persuasion in Practical Argument Using Value-based Argumentation Frameworks' (2003), by Trevor Bench-Capon

In practical reasoning an argument often has the following form:

Action A should be performed in circumstances C, because the performance of A in C would promote some good G.

This kind of argument can be attacked in a number of ways:
- It may be that circumstances C do not obtain; or it may be that performing A in C would not promote good G. These are similar to the way in which a factual argument can be attacked in virtue of the falsity of a premise, or because the conclusion does not follow from the premise.
- Alternatively it can be attacked because performing some action B, which would exclude A, would also promote G in C. This is like an attack using an argument with a contradictory conclusion.
- However, a practical argument like the one above can be attacked in two additional ways: It may be that G is not accepted as a good worthy of promotion, or that performing action B, which would exclude performing A, would promote a good H in C, and good H is considered more desirable than G. The first of these new attacks concerns the ends to be considered, and the second the relative weight to be given to the ends...
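
To make the scheme and its attack points easier to play with, here is a small Python sketch of my own (not from Bench-Capon's chapter); the actions, goods and value ordering are invented, and the two 'factual' attacks are left to the agent's beliefs rather than modelled here.

from dataclasses import dataclass

@dataclass
class PracticalArgument:
    """'Action `action` should be performed in circumstances `circumstances`,
    because performing it there would promote good `good`.'"""
    action: str
    circumstances: str
    good: str

def attacks(attacker, target, excludes, value_order):
    """Return the kinds of attack `attacker` makes on `target`.

    `excludes` is a set of (a, b) pairs meaning that performing a excludes b;
    `value_order` maps each accepted good to a rank (higher = more desirable,
    absent or <= 0 = not accepted as worthy of promotion).
    """
    kinds = []
    if attacker.circumstances != target.circumstances:
        return kinds  # the arguments concern different situations
    if value_order.get(target.good, 0) <= 0:
        kinds.append("good not accepted as worthy of promotion")
    mutually_exclusive = (attacker.action, target.action) in excludes
    if mutually_exclusive and attacker.good == target.good:
        kinds.append("alternative action promoting the same good")
    if mutually_exclusive and attacker.good != target.good:
        if value_order.get(attacker.good, 0) > value_order.get(target.good, 0):
            kinds.append("promotes a more desirable good")
    return kinds

# Invented example: funding schools excludes building the road.
road = PracticalArgument("build road", "budget surplus", "economic growth")
school = PracticalArgument("fund schools", "budget surplus", "education")
print(attacks(school, road,
              excludes={("fund schools", "build road")},
              value_order={"education": 2, "economic growth": 1}))
# ['promotes a more desirable good']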

Tuesday 24 April 2007

19, Assumption-based argumentation for epistemic and practical reasoning

Notes taken from 'Assumption-based argumentation for epistemic and practical reasoning' (2007), by Francesca Toni

"Assumption-based argumentation can serve as an effective computational tool for argumentation-based epistemic and practical reasoning, as required in a number of applications. In this paper we substantiate this claim by presenting formal mappings from frameworks for epistemic and practical reasoning onto assumption-based argumentation frameworks..."

1, Introduction

... In this paper, we consider two forms of reasoning that rational agents may need to perform, namely reasoning as to which beliefs they should hold (epistemic) and reasoning as to which course of action/decision they should choose (practical)...

2, Abstract and assumption-based argumentation...

3, Epistemic Reasoning...

3.1, Epistemic frameworks without preference rules...

3.2, Epistemic frameworks with preference rules...

4, Practical reasoning...

5, Example...

6, Conclusions

We have proposed concrete instances of assumption-based argumentation for epistemic reasoning... and practical reasoning...

... Within the ARGUGRID project, our approach to (epistemic and) practical reasoning can be used to model decisions concerning the orchestration of services available over the grid, taking into account preferences by the users and/or the service providers...

Monday 23 April 2007

The Big Question

After a discussion with fellow PhD students, and after spending the last two months bogged down in the nitty-gritty details of argumentation structure and semantics, it is time to ask the question: what is the big question?

Drawing inspiration from 'Getting to Yes: Negotiating Agreement Without Giving In', the big question will stem from this one, "What is the best way for agents to deal with their differences?"

So that's what the next few weeks will be dedicated to, defining the big question.

Sunday 15 April 2007

18, A Semantic Web Primer

Summary of ‘A Semantic Web Primer’ by Grigoris Antoniou and Frank van Harmelen (2004)

1, The Semantic Web Vision

- The Semantic Web is an initiative that aims at improving the current state of the World Wide Web.
- The key idea is the use of machine-processable Web information.
- Key technologies include explicit metadata, ontologies, logic and inferencing, and intelligent agents.
- The development of the Semantic Web proceeds in layers.

2, Structured Web Documents in XML

- XML is a metalanguage that allows users to define markup for their documents using tags.
- Nesting of tags introduces structure. The structure of documents can be enforced using schemas or DTDs.
- XML separates content and structure from formatting.
- XML is the de facto standard for the representation of structured information on the Web and supports machine processing of information.
- XML supports the exchange of structured information across different applications through markup, structure, and transformations.
- XML is supported by query languages.

Some points discussed in subsequent chapters include:
- The nesting of tags does not have standard meaning.
- The semantics of XML documents is not accessible to machines, only to people.
- Collaboration and exchange are supported if there is an underlying shared understanding of the vocabulary. XML is well-suited for close collaboration, where domain- or community-based vocabularies are used. It is not so well suited for global communication.
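
As a quick illustration of the machine-processing point above, here is a small Python sketch of my own (not from the book) that parses a made-up XML fragment with the standard library and queries its structure; note that the program sees only tag names and nesting, not their meaning.

import xml.etree.ElementTree as ET

# A made-up structured document: the nesting of tags gives it structure,
# but the tag names have no standard meaning for the machine.
doc = """
<library>
  <book year="2004">
    <title>A Semantic Web Primer</title>
    <author>Grigoris Antoniou</author>
    <author>Frank van Harmelen</author>
  </book>
</library>
"""

root = ET.fromstring(doc)

# Query the structure: find all author elements anywhere in the tree.
print([a.text for a in root.findall(".//author")])
# ['Grigoris Antoniou', 'Frank van Harmelen']

# Content and structure are separate from formatting; rendering is left
# to the consuming application.
for book in root.findall("book"):
    print(book.find("title").text, "(" + book.get("year") + ")")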

3, Describing Web Resources in RDF

- RDF provides a foundation for representing and processing metadata.
- RDF has a graph-based data model. Its key concepts are resource, property, and statement. A statement is a resource-property-value triple.
- RDF has an XML-based syntax to support syntactic interoperability. XML and RDF complement each other because RDF supports semantic interoperability.
- RDF has a decentralised philosophy and allows incremental building of knowledge, and its sharing and reuse.
- RDF is domain-independent. RDF Schema provides a mechanism for describing specific domains.
- RDF Schema is a primitive ontology language. It offers certain modelling primitives with fixed meaning. Key concepts of RDF Schema are class, subclass relations, property, subproperty relations, and domain and range restrictions.
- There exist query languages for RDF and RDFS.
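
A minimal sketch of these ideas in Python using the rdflib library (my own example, not from the book); the ex: namespace and the resource names are invented for illustration.

from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, RDFS

EX = Namespace("http://example.org/")
g = Graph()
g.bind("ex", EX)

# Statements are resource-property-value triples.
g.add((EX.SemanticWebPrimer, RDF.type, EX.Book))
g.add((EX.SemanticWebPrimer, EX.author, Literal("Grigoris Antoniou")))

# RDF Schema: classes, subclass relations, and domain restrictions.
g.add((EX.Book, RDFS.subClassOf, EX.Publication))
g.add((EX.author, RDFS.domain, EX.Publication))

# RDF has an XML-based syntax; Turtle is printed here only for readability.
print(g.serialize(format="turtle"))

# Query languages exist for RDF/RDFS; rdflib supports SPARQL.
for row in g.query("SELECT ?b WHERE { ?b a ex:Book }", initNs={"ex": EX}):
    print(row.b)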

Some points that will be discussed in the next chapter:
- RDF Schema is quite primitive as a modelling language for the Web. Many desirable modelling primitives are missing.
- Therefore we need an ontology layer on top of RDF/RDFS.

4, Web Ontology Language: OWL

- OWL is the proposed standard for Web ontologies. It allows us to describe the semantics of knowledge in a machine-accessible way.
- OWL builds upon RDF and RDF Schema: (XML-based) RDF syntax is used; instances are defined using RDF descriptions; and most RDFS modelling primitives are used.
- Formal semantics and reasoning support is provided through the mapping of OWL on logics. Predicate logic and description logics have been used for this purpose.

While OWL is sufficiently rich to be used in practice, extensions are in the making. They will provide further logical features, including rules.

5, Logic and Inference: Rules

- Horn logic is a subset of predicate logic that allows efficient reasoning. It forms a subset orthogonal to description logics.
- Horn logic is the basis of monotonic rules.
- Non-monotonic rules are useful in situations where the available information is incomplete. They are rules that may be overridden by contrary evidence (other rules).
- Priorities are used to resolve some conflicts between non-monotonic rules.
- The representation of rules in XML-like languages is straightforward.
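
As a concrete illustration of monotonic rules (my own sketch, not from the book): a tiny forward-chaining evaluator for propositional Horn rules; once the premises of a rule are derived its head is added, and adding further facts can never retract a conclusion.

def forward_chain(facts, rules):
    """Apply propositional Horn rules (body -> head) up to a fixpoint.

    `facts` is a set of atoms; `rules` is a list of (body, head) pairs,
    where body is a set of atoms. Reasoning is monotonic: conclusions
    are never withdrawn when new facts are added.
    """
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for body, head in rules:
            if body <= derived and head not in derived:
                derived.add(head)
                changed = True
    return derived

# Invented example rules.
rules = [({"bird"}, "has_wings"), ({"has_wings", "healthy"}, "can_fly")]
print(forward_chain({"bird", "healthy"}, rules))
# {'bird', 'healthy', 'has_wings', 'can_fly'} (set order may vary)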

Friday 13 April 2007

Agents, AI and the Semantic Web

Quotes taken from 'A Semantic Web Primer' (2004), by Grigoris Antoniou and Frank van Harmelen

(page 199, AI and Web Services)

Web services are an application area where Artificial Intelligence techniques can be used effectively, for instance, for matching between service offers and service requests, and for composing complex services from simpler services, where automated planning can be utilized.

(page 223, How it all fits together)

... we consider an automated bargaining scenario to see how all technologies discussed fit together.

- Each bargaining party is represented by a software agent...
- The agents need to agree on the meaning of certain terms by committing to a shared ontology, e.g., written in OWL.
- Case facts, offers, and decisions can be represented using RDF statements. These statements become really useful when linked to an ontology.
- Information is exchanged between the agents in some XML-based (or RDF-based) language.
- The agent negotiation strategies are described in a logical language.
- An agent decides about the next course of action through inferring conclusions from the negotiation strategy, case facts, and previous offers and counteroffers.

Predicate Logic, Nonmonotonic Rules and Priorities

Quotes taken from 'A Semantic Web Primer' (2004), by Grigoris Antoniou and Frank van Harmelen

(page 94, An axiomatic semantics for RDF and RDF Schema)

... we formalize the meaning of the modeling primitives of RDF and RDF Schema. Thus we capture the semantics of RDF and RDFS.

The formal language we use is predicate logic, universally accepted as the foundation of all (symbolic) knowledge representation. Formulas used in the formalization are referred to as axioms.

By describing the semantics of RDF and RDFS in a formal language like logic we make the semantics unambiguous and machine accessible. Also, we provide a basis for reasoning support by automated reasoners manipulating logical formulas.

(page 161, Nonmonotonic rules: Motivation and syntax)

... we turn our attention to nonmonotonic rule systems. So far (i.e. with monotonic rules), once the premises of a rule were proved, the rule could be applied and its head could be derived as a conclusion. In nonmonotonic rule systems, a rule may not be applied even if all premises are known because we have to consider contrary reasoning chains. In general, the rules we consider from now on are called defeasible, because they can be defeated by other rules. To allow conflicts between rules, negated atomic formulas may occur in the head and the body of rules...

... To distinguish between defeasible rules and standard, monotonic rules, we use a different arrow:

p(X) => q(X)
r(X) => ¬q(X)

In this example, given also the facts

p(a)
r(a)

we conclude neither q(a) nor ¬q(a). It is a typical example of two rules blocking each other. This conflict may be resolved using priorities among rules. Suppose we knew somehow that the first rule is stronger than the second; then we could indeed derive q(a).

Priorities arise naturally in practice, and may be based on various principles:
- The source of one rule may be more reliable than the source of the second, or may even have higher priority. For example, in law, federal law preempts state law...
- One rule may be preferred over another because it is more recent.
- One rule may be preferred over another because it is more specific. A typical example is a general rule with some exceptions; in such cases, the exceptions are stronger than the general rule.

Specificity may often be computed based on the given rules, but the other two principles cannot be determined from the logical formalization. Therefore, we abstract from the specific prioritization principle used, and assume the existence of an external priority relation on the set of rules. To express the relation syntactically, we extend the rule syntax to include a unique label, for example,

r1: p(X) => q(X)
r2: r(X) => ¬q(X)

Then we can write

r1 > r2

to specify that r1 is stronger than r2.

We do not impose many conditions on >. It is not even required that the rules form a complete ordering. We only require the priority relation to be acyclic. That is, it is impossible to have cycles of the form

r1 > r2 > ... > rn > r1

Note that priorities are meant to resolve conflicts among competing rules. In simple cases two rules are competing only if the head of one rule is the negation of the head of the other. But in applications it is often the case that once a predicate p is derived, some other predicates are excluded from holding...
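
The q(a)/¬q(a) example is easy to reproduce in code. Below is a Python sketch of my own, much simplified relative to the book's defeasible rule systems: rules are propositional, a literal's negation is written with a leading '~', and a rule's head is concluded only if every applicable competing rule is strictly weaker under the priority relation.

def neg(literal):
    """Complement of a propositional literal; negation is written '~p'."""
    return literal[1:] if literal.startswith("~") else "~" + literal

def defeasible_conclusions(facts, rules, stronger):
    """Simplified defeasible reasoning with priorities.

    `rules` maps a label to (body, head): body is a set of literals, head a literal.
    `stronger` is a set of (r, r2) pairs meaning rule r has priority over r2.
    A rule's head is concluded only if its body holds and every applicable rule
    with the complementary head is strictly weaker than it.
    """
    conclusions = set()
    for label, (body, head) in rules.items():
        if not body <= facts:
            continue
        attackers = [l for l, (b, h) in rules.items()
                     if h == neg(head) and b <= facts]
        if all((label, a) in stronger for a in attackers):
            conclusions.add(head)
    return conclusions

# r1: p => q, r2: r => ~q, with facts p and r (standing in for p(a), r(a)).
rules = {"r1": ({"p"}, "q"), "r2": ({"r"}, "~q")}
facts = {"p", "r"}
print(defeasible_conclusions(facts, rules, stronger=set()))
# set(): the two rules block each other, neither q nor ~q is derived
print(defeasible_conclusions(facts, rules, stronger={("r1", "r2")}))
# {'q'}: the priority r1 > r2 resolves the conflict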

Wednesday 4 April 2007

17, Information-seeking agent dialogs with permissions and arguments

Notes taken from ‘Information-seeking agent dialogs with permissions and arguments’ (2006), by Sylvie Doutre et al.

“… Many distributed information systems require agents to have appropriate authorisation to obtain access to information… We present a denotational semantics for such dialogs, drawing on Tuple Centres (programmable Tuple Spaces)…”

1, Introduction

… we present a formal syntax and semantics for such information-seeking dialogs involving permissions and arguments…

2.1, Dialog systems

The common elements of dialog systems are…

A typology of human dialogs was articulated by Walton and Krabbe, based upon the overall goal of the dialogue, the participants’ individual dialog goals, and the information they have at the commencement of the dialog (the topic language and the context)…

2.2, Tuple spaces

… a model of communication between distributed computational entities… The essential idea is that computational agents connected together may create named data objects, called tuples, which persist, even beyond the lifetimes of their creators, until explicitly deleted… They are stored in tuple spaces, which are blackboard-like shared data stores, and are normally accessed by other agents by associative pattern matching… There are three basic operators on tuple spaces: out, rd, in…
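
To make the three operators concrete, here is a toy in-memory Python sketch of my own (not from the paper; a real Linda implementation would add blocking reads and richer associative matching over typed fields):

class TupleSpace:
    """A blackboard-like shared store with the three basic Linda operators.
    A pattern is a tuple in which None acts as a wildcard."""

    def __init__(self):
        self.tuples = []

    def out(self, tup):
        """Insert a tuple; it persists until explicitly removed."""
        self.tuples.append(tup)

    def _match(self, pattern, tup):
        return len(pattern) == len(tup) and all(
            p is None or p == t for p, t in zip(pattern, tup))

    def rd(self, pattern):
        """Read (without removing) a matching tuple, or None."""
        return next((t for t in self.tuples if self._match(pattern, t)), None)

    def in_(self, pattern):
        """Read and remove a matching tuple ('in' is a Python keyword, hence in_)."""
        t = self.rd(pattern)
        if t is not None:
            self.tuples.remove(t)
        return t

space = TupleSpace()
space.out(("offer", "agent_a", 100))
print(space.rd(("offer", None, None)))        # ('offer', 'agent_a', 100), still stored
print(space.in_(("offer", "agent_a", None)))  # removed from the space
print(space.rd(("offer", None, None)))        # None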

2.3, LGL as a semantics for dialog systems

… (We) show how Law-Governed Linda (LGL) can be used as a denotational semantics for these systems, by associating elements of an LGL 5-tuple to the elements of the dialog system. Note that the dialog goal and the outcome rules have no associated elements in LGL…

3, Secure info-seek dialogue

3.1, Motivating example…

3.2, Protocol syntax

… In this system, an argument must be provided by an agent to justify its having permission to access some information. If access to information for agent x is refused by agent y, then agent x must try to persuade agent y that it should be allowed permission. This persuasion is made using arguments. If agent y yields to agent x’s arguments, then y provides x with the information requested.

(Definitions given for Participants, Dialog goal, Context, Topic language, Communication language, Protocol, Effect rules, Outcome rules)

3.3, LGL semantics

(Associations to elements of an LGL 5-tuple given for elements of the dialog system: Participants, Context, Communication language, Protocol, Effect rules)

3.4, Illustration…

4, Implementation

In Section 1, we stated that our primary objective was the development of a semantics for these Information-seeking dialogs which facilitated implementation of the protocol. In order to assess whether the protocol and semantics of Section 3 met this objective, we undertook an implementation…

5, Related work and conclusions

… Our contribution in this paper is a novel semantics for information-seeking agent communications protocols involving permissions and arguments, in which utterances under the protocol are translated into commands in Law-Governed Linda and, through them, into actions on certain associated tuple spaces…

Tuesday 3 April 2007

ACL and dialog protocol semantics

Quoted from ‘Information-seeking agent dialogs with permissions and arguments’ (2006), by Sylvie Doutre et al.

There are several different functions that a semantics for an agent communications language or dialog protocol may be required to serve:

- To provide a shared understanding to participants in a communicative interaction of the meaning of individual utterances, of sequences of utterances, and of dialogues.

- To provide a shared understanding to designers of agent protocols and to the designers (who may be different) of agents using those protocols of the meaning of individual utterances, of sequences of utterances, and of dialogues.

- To provide a means by which the properties of languages and protocols may be studied formally and with rigor, either alone or in comparison with other languages or protocols.

- To provide a means by which languages and protocols may be readily implemented.

In this paper, our focus is on semantics for agent protocols which meet this last objective... Rogier van Eijk identified three generic types of semantics of agent communication languages (axiomatic, operational and denotational).

16, Dialogues for Negotiation

Notes taken from ‘Dialogues for Negotiation: Agent Varieties and Dialogue Sequences’ (2001), by Fariba Sadri, Francesca Toni and Paolo Torroni

“… (The proposed solution) relies upon agents agreeing solely upon a language of negotiation, while possibly adopting different negotiation policies, each corresponding to an agent variety. Agent dialogues can be connected within sequences, all aimed at achieving an individual agent’s goal. Sets of sequences aim at allowing all agents in the system to achieve their goals…”

1, Introduction

… Many approaches in the area of one-to-one negotiation are heuristic-based and, in spite of their experimentally proven effectiveness, they do not easily lend themselves to expressing theoretically provable properties. Other approaches present a good descriptive model, but fail to provide an execution model that can help to forecast the behaviour of any corresponding implemented system…

… Note that we do not make any concrete assumption on the internal structure of agents, except for requiring that they hold beliefs, goals, intentions and, possibly, resources.

2, Preliminaries

2.1, A performative or dialogue move is an instance of a schema tell(X, Y, Subject, T)… e.g. tell(a, b, request(give(nail)), 1)…

2.2, A language for negotiation L is a (possibly infinite) set of (possibly non-ground) performatives. For a given L, we define two (possibly infinite) subsets of performatives, I(L) and F(L)…, called respectively initial moves and final moves. Each final move is either successful or unsuccessful.

2.3, An agent system is a finite set A, where each x in A is a ground term, representing the name of an agent, equipped with a knowledge base K(x).

3, Dialogues

3.4, Given an agent system A, equipped with a language for negotiation L, and an agent x in A, a dialogue constraint for x is a (possibly non-ground) if-then rule of the form: p(T) & C => p’(T + 1), where… The performative p(T) is referred to as the trigger, p’(T + 1) as the next move and C as the condition of the dialogue constraint.

3.5, A dialogue between two agents x and y is a set of ground performatives, {p0, p1, p2, …}, such that… A dialogue {p0, p1, …, pM}… is terminated if pM is a ground final move…

3.6, A request dialogue wrt a resource R and an intention I of agent x is a dialogue… such that…

3.7, (Types of terminated request dialogues) Let I be the intention of some agent a, and R be a missing resource in I. Let d be a terminated request dialogue wrt R and I, and I’ be the intention resulting from d. Then, if missing(Rs), plan(P) are in I and missing(Rs’), plan(P’) are in I’:
i) d is successful if P’ = P, Rs’ = Rs \ {R};
ii) d is conditionally or c-successful if Rs’ /= Rs and Rs’ /= Rs \ {R};
iii) d is unsuccessful if I’ = I.
Note that, in the case of c-successful dialogues, typically, but not always, the agent’s plan will change (P’ /= P).

3.8, An agent x in A is convergent iff, for every terminated request dialogue of x, wrt some resource R and some intention I, the cost of the returned intention I’ is not higher than the cost of I. The cost of an intention can be defined as the number of missing resources in the intention.

4, Properties of Agent Programs

4.9, An agent x in A is deterministic iff, for each performative p(t) which is a ground instance of a schema in L(in), there exists at most one performative p’(t+1) which is a ground instance of a schema in L(out) such that ‘p(t) & C => p’(t+1)’ is in the agent program S and ‘K & p(t)’ entails C.

4.10, An agent program S is non-overlapping iff for each performative p which is a ground instance of a schema in L(in), for each C, C’ in S(p) such that C /= C’, then C ^ C’ = false.

(Theorem 1) If the (grounded) agent program of x is non-overlapping, then x is deterministic.

4.11, An agent x in A is exhaustive iff, for each performative p(t) which is a ground instance of a schema in L(in) \ F(L), there exists at least one performative p’(t+1) which is a ground instance of a schema in L(out) such that ‘p(t) & C => p’(t+1)’ is in S and ‘K & p(t)’ entails C.

4.12, Let L(S) be the set of all (not necessarily ground) performatives p(T) that are triggers in dialogue constraints:
L(S) = {p(T) | there exists ‘p(t) & C => p’(t+1)’ in S}. (Obviously, L(S) is a subset of L(in)). Then, S /= {} is covering iff for every performative p which is a ground instance of a schema in L(in), the disjunction of C’s in S(p) is ‘true’ and L(S) = L(in) \ F(L).

(Theorem 2) If the (grounded) agent program of x is covering, then x is exhaustive.
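
To get a feel for how dialogue constraints of the form p(T) & C => p'(T + 1) drive an agent's replies, here is a Python sketch of my own (a simplification of Definitions 3.4 and 4.9-4.12: performatives are plain tuples and the condition check against the knowledge base is reduced to a boolean function; the request/accept/refuse policy is invented).

from collections import namedtuple

# A dialogue constraint 'p(T) & C => p_next(T+1)': when the trigger
# performative is observed at time T and the condition holds with respect
# to the knowledge base, the agent may utter the next move at time T+1.
Constraint = namedtuple("Constraint", ["trigger", "condition", "next_move"])

def next_moves(constraints, knowledge_base, performative, t):
    """Return the moves the agent may utter at time t+1 in reply to `performative`."""
    moves = []
    for c in constraints:
        if c.trigger == performative[0] and c.condition(knowledge_base, performative):
            moves.append((c.next_move(knowledge_base, performative), t + 1))
    return moves

# Invented policy: accept a request for a resource that is not needed, refuse otherwise.
constraints = [
    Constraint("request",
               lambda kb, p: p[1] not in kb["needed"],
               lambda kb, p: ("accept", p[1])),
    Constraint("request",
               lambda kb, p: p[1] in kb["needed"],
               lambda kb, p: ("refuse", p[1])),
]
kb = {"needed": {"nail"}}
print(next_moves(constraints, kb, ("request", "hammer"), 1))  # [(('accept', 'hammer'), 2)]
print(next_moves(constraints, kb, ("request", "nail"), 1))    # [(('refuse', 'nail'), 2)]

Because the two conditions are mutually exclusive and together cover every request, this little program is non-overlapping and covering in the spirit of 4.10 and 4.12, so the agent behaves deterministically and exhaustively on request moves (cf. Theorems 1 and 2).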

5, Agent Varieties: Concrete Examples of Agent Programs…

6, Dialogue Sequences

6.13, A sequence of dialogues s(I) wrt an intention I of an agent x with goal(G) in I is an ordered set {d1, d2, …, dn, …}, associated with a sequence of intentions I1, I2, …, In+1, … such that…

6.14, A sequence of dialogues {d1, d2, …, dn} wrt an initial intention I of an agent x and associated with the sequence of intentions I1, I2, …, In+1 is terminated iff there exists no possible request dialogue wrt In+1 that x can start.

6.15, (Success of a dialogue sequence) A terminated sequence of dialogues {d1, d2, …, dn} wrt an initial intention I of an agent x and associated with the sequence of intentions I1, I2, …, In+1 is successful if In+1 has an empty set of missing resources; it is unsuccessful otherwise.

6.16, Given an initial intention I of agent x, containing a set of missing resources Rs, the agent dialogue cycle is the following…

(Theorem 3) Given an agent x in A, if x’s agent dialogue cycle returns ‘success’ then there exists a successful dialogue sequence wrt the initial intention I of x.

(Theorem 4) Given an agent x with intention I, and a successful dialogue sequence s(I) generated by x’s dialogue cycle, if x is convergent, then the number of dialogues in s(I) is bounded by m.|Rs|, where missing(Rs) is in I and |A \ {x}| = m, A being the set of agents in the system.

7, Using Dialogue Sequences for Resource Reallocation

7.17, (Resource reallocation problem – rrp)
Given an agent system A, with each agent x in A equipped with a knowledge base K(x) and an intention I(x),
- the rrp for an agent x in A is the problem of finding a knowledge base K’(x), and an intention I’(x) (for the same goal as I(x)) such that missing({}) is in I’(x).
- the rrp for the agent system A is the problem of solving the rrp for every agent in A.
A rrp is solved if the required (sets of) knowledge base(s) and intention(s) is (are) found.

(Theorem 5) (Correctness of the agent dialogue cycle wrt the rrp) Let A be the agent system, with the agent programs of all agents in A being convergent. If all agent dialogue cycles of all agents in A return ‘success’ then the rrp for the agent system is solved.

7.18, Let A be an agent system consisting of n agents. Let R(A) be the union of all resources held by all agents in A, and R(I(A)) be the union of all resources needed to make all agents’ initial intentions I(A) executable. A is weakly complete if, given that R(I(A)) is a subset of R(A), there exist n successful dialogue sequences, one for each agent in A, such that the intentions I’(A) returned by the sequences have the same plans as I(A) and all have an empty set of missing resources.

8, Conclusions…

Monday 2 April 2007

Protocols and Strategies

In 'A Generative Inquiry Dialogue System', the assert and open functions seem to allow the agent to assert or open (respectively) anything, as long as the conclusion of the assert/open move is in the current question store and the move has not been asserted already. It is only the strategy that sets down the (proper) rules for what can and cannot be asserted/opened, and then selects one of these moves. Is this a good approach? Shouldn’t the protocol play a larger role (in actually defining what the legal moves are) and the strategy a lesser role (in only selecting one of these legal moves)?

The reason for keeping the protocol simple and leaving internal agent reasoning to the strategy is to allow external checking of conformance to the protocol: "Note that it is straightforward to check conformance with the protocol as the protocol only refers to public elements of the dialogue."

Commitment stores

In the 'A Generative Inquiry Dialogue System' paper, a commitment store is associated with each agent and records what that agent has said during the course of the dialogue. Instead of this, or as well as this, it may be worth associating with each agent a commitment store that records what the other agents have said to it (and when), where the stored commitments are kept on record beyond the particular dialogue in which they were asserted (i.e. until retracted).
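
A rough sketch of this alternative in Python (purely my own, not from the paper): commitments are keyed by speaker and dialogue, timestamped, and kept until explicitly retracted.

from datetime import datetime

class CommitmentStore:
    """Per-agent store of what *other* agents have said to this agent.

    Each commitment records the speaker, the dialogue it was made in and a
    timestamp, and it persists across dialogues until explicitly retracted.
    """

    def __init__(self, owner):
        self.owner = owner
        self.commitments = []

    def record(self, speaker, dialogue_id, claim):
        self.commitments.append({"speaker": speaker, "dialogue": dialogue_id,
                                 "claim": claim, "when": datetime.now()})

    def retract(self, speaker, claim):
        self.commitments = [c for c in self.commitments
                            if not (c["speaker"] == speaker and c["claim"] == claim)]

    def by_speaker(self, speaker):
        """Everything `speaker` has committed to, across all past dialogues."""
        return [c for c in self.commitments if c["speaker"] == speaker]

store = CommitmentStore("agent_x")
store.record("agent_y", "d1", "p")
store.record("agent_y", "d2", "~q")
print([c["claim"] for c in store.by_speaker("agent_y")])  # ['p', '~q']
store.retract("agent_y", "p")
print([c["claim"] for c in store.by_speaker("agent_y")])  # ['~q']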

Also, in the 'A Generative Inquiry Dialogue System' approach, as a commitment store consists of things that an agent has already publicly declared during the course of a dialogue, its contents are visible to the other agent participating in the dialogue. Certain questions arise in such an approach:
- Do agents not participating in the dialogue have access to the commitment stores of the participating agents?
- Do the commitments exist outside the dialogue that they were asserted in?
- Does each agent have a separate commitment store for each dialogue that it participates in?