Monday, 26 November 2007

Intelligent Design

Source: The Economist, October 20th 2007
Section: Economics Focus
Title: Intelligent Design
Subtitle: A theory of an intelligently guided invisible hand wins the Nobel prize

... despite its dreary name, mechanism design is a hugely important area of economics, and underpins much of what dismal scientists do today. It goes to the heart of one of the biggest challenges in economics: how to arrange our economic interactions so that, when everyone behaves in a self-interested manner, the result is something we all like. The word "mechanism" refers to the institutions and the rules of the game that govern our economic activities...

Mechanism-design theory aims to give the invisible hand a helping hand, in particular by focusing on how to minimise the economic cost of "asymmetric information" - the problem of dealing with someone who knows more than you do...

His [Mr Hurwicz's] big idea was "incentive compatibility". The way to get as close as possible to the most efficient outcome is to design mechanisms in which everybody does best for themselves by sharing truthfully whatever private information they have that is asked for...
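
As a concrete aside (mine, not the article's), the standard textbook example of an incentive-compatible mechanism is the second-price sealed-bid (Vickrey) auction: the winner pays the second-highest bid, so each bidder does best by simply reporting its true valuation. A minimal Java sketch:

import java.util.Arrays;

public class VickreyAuction {

    // Runs a second-price sealed-bid auction and prints the outcome.
    static void run(double[] bids) {
        double[] sorted = bids.clone();
        Arrays.sort(sorted);                                  // ascending order
        double highest = sorted[sorted.length - 1];
        double secondHighest = sorted[sorted.length - 2];
        int winner = 0;
        while (bids[winner] != highest) winner++;             // first bidder holding the highest bid
        System.out.printf("Bidder %d wins and pays %.2f (the second-highest bid)%n",
                winner, secondHighest);
    }

    public static void main(String[] args) {
        // If these bids equal the bidders' true valuations, no bidder can gain by
        // misreporting: the price paid is set by the other bidders' bids, not one's own.
        run(new double[] { 10.0, 7.0, 4.0 });
    }
}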

37, An implementation of norm-based agent negotiation

Notes taken from 'An implementation of norm-based agent negotiation' (2007), by Pieter Dijkstra, Henry Prakken, Kees de Vey Mestdagh

1, Introduction

2, The Problem of Regulated Information Exchange

3, Requirements for the Multi-Agent Architecture

Knowledge: In order to regulate distributed information exchange, agents must have knowledge of the relevant regulations and the local interpretations of those regulations, their goals and the likely consequences of their actions...

Reasoning: ... the agents should be capable of generating and evaluating arguments for and against certain claims and they must be able to revise their beliefs as a result of the dialogues. Finally, in order to generate conditional offers, the agents should be able to do some form of hypothetical reasoning.

Communication: ...

4, Formalisation

Dialogical interaction: Communication language; Communication protocol

5, Agent Architecture

Description of the Components: User communication module; Database communication module; Agent communication language; Execution cycle module; Negotiation policy module; Argumentation system module

Negotiation Policy: ... Our negotiation policies cover two issues: the normative issue of whether accepting an offer is obligatory or forbidden, and the teleological issue of whether accepting an offer violates the agent's own interests. Of course these policies can be different for the requesting and the responding agent... In the negotiation policy for a reject, the policy returns a why-reject move which starts an embedded persuasion dialogue. The specification and implementation of embedded persuasion dialogues will be the subject of future research.
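
As an illustration of this two-step test (the normative question first, the teleological question second), here is a rough Java sketch. It is my reconstruction, not the paper's policy tables, and the type and method names are invented; the policy for a received reject, which answers with a why-reject move and opens an embedded persuasion dialogue, would be a separate case.

// Illustrative sketch only (not the paper's own policy tables): evaluating a
// received offer by checking the normative issue before the teleological one.
enum Move { ACCEPT, REJECT, COUNTER_OFFER }

class OfferPolicy {

    Move evaluate(Offer offer, NormativeKb norms, Interests interests) {
        if (norms.isObligatory(offer)) return Move.ACCEPT;   // the regulations require acceptance
        if (norms.isForbidden(offer))  return Move.REJECT;   // the regulations forbid acceptance
        // Permitted but not obligatory: decide on the agent's own interests.
        return interests.violatedBy(offer) ? Move.COUNTER_OFFER : Move.ACCEPT;
    }
}

// Hypothetical supporting types, included only to keep the sketch self-contained.
class Offer { }
interface NormativeKb { boolean isObligatory(Offer o); boolean isForbidden(Offer o); }
interface Interests  { boolean violatedBy(Offer o); }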

Agent execution cycle: The agent execution cycle processes messages and triggers other modules during the selection of the appropriate dialogue moves. First, the speech act, locution and content are parsed from the incoming message, then depending on the locution (offer, accept, withdraw or reject) the next steps are taken... The execution cycle can be represented in Java pseudo-code...
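
The paper gives the cycle in Java pseudo-code; the fragment below is my reconstruction from the description above rather than the paper's own code, and the assumed message format and helper methods are invented.

// Reconstruction of the execution cycle described above (not the paper's code).
// The incoming message is assumed to have the format "speechAct;locution;content".
class ExecutionCycle {

    void handle(String incoming) {
        // 1. Parse speech act, locution and content from the incoming message.
        String[] parts = incoming.split(";", 3);
        if (parts.length < 3) { terminate("protocol-error"); return; }
        String speechAct = parts[0], locution = parts[1], content = parts[2];

        // 2. Dispatch on the locution, triggering the appropriate module.
        switch (locution) {
            case "offer":
                reply(negotiationPolicy(content));     // accept, reject or counter-offer
                break;
            case "reject":
                reply("why-reject");                   // opens an embedded persuasion dialogue
                break;
            case "accept":
            case "withdraw":
                terminate(locution);                   // terminal moves end the dialogue
                break;
            default:
                terminate("protocol-error");
        }
    }

    String negotiationPolicy(String offer) { return "accept"; } // placeholder: see policy module above
    void reply(String move)     { System.out.println("send: " + move); }
    void terminate(String why)  { System.out.println("dialogue closed: " + why); }
}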

6, Illustration of the Agent Architecture

Knowledge base: Knowledge is represented in the Prolog-like syntax of the ASPIC tool...

Dialogue from example 2: ...

7, Conclusion

Wednesday, 21 November 2007

36, Towards a multi-agent system for regulated information exchange in crime investigations

Notes taken from 'Towards a multi-agent system for regulated information exchange in crime investigations' (2006), by Pieter Dijkstra, Floris Bex, Henry Prakken, Kees de Vey Mestdagh

1, Introduction

... we define dialogue policies for the individual agents, specifying their behaviour within a negotiation. Essentially, when deciding to accept or reject an offer or to make a counteroffer, an agent first reasons about the law and then about the interests that are at stake: he first determines whether it is obligatory or permitted to perform the actions specified in the offer; if permitted but not obligatory, the agent next determines whether it is in his interests to accept the offer...

2, The problem of regulated information exchange

3, Examples

4, Requirements for the multi-agent architecture

(Knowledge; Reasoning; Goals; Communication)

5, Outline of a computational architecture

Dialogical Interaction: communication language; communication protocol

The Agents: representation of knowledge and goals; reasoning engine; dialogue policies

6, Illustration of the proposed architecture

7, Conclusion

Monday, 12 November 2007

Modelling Dialogue Types

Taken from 'Dialogue Frames in Agent Communication' (1998), by Chris Reed

Clearly the various types of dialogue are not concerned with identical substrate: persuasion, inquiry and information-seeking are epistemic, negotiation is concerned with what might generally be called 'contracts', and deliberation with 'plans'. The model presented [] does not aim to restrict either the agent architecture or the underlying communication protocol to any particular formalism...

Thus the foundation of the model is a set of agents, A, each of whom have a set of beliefs, B, contracts, C, and plans, P...

... it is possible to define the set of dialogue types, where each type is a name-substrate pair,
D = {(persuade,B), (negotiate,C), (inquire,B), (deliberate,P), (infoseek,B)}
From this matrix, a dialogue frame is defined as a tuple with four elements...

A dialogue frame is thus of a particular type, t, and focused on a particular topic, tau: a persuasion dialogue will be focused on a particular belief, a negotiation on a contract, a deliberation on a plan, and so on. A dialogue frame is initiated by a propose-accept sequence, and terminates with a characteristic utterance indicating acceptance or concession to the topic on the part of one of the agents...
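
A minimal Java sketch of the name-substrate pairing and of the frame's first two elements (my own encoding, not Reed's notation; the remaining elements of the four-tuple are omitted):

// Each dialogue type is bound to the kind of object it is about.
enum Substrate { BELIEF, CONTRACT, PLAN }

enum DialogueType {
    PERSUADE(Substrate.BELIEF),
    NEGOTIATE(Substrate.CONTRACT),
    INQUIRE(Substrate.BELIEF),
    DELIBERATE(Substrate.PLAN),
    INFOSEEK(Substrate.BELIEF);

    final Substrate substrate;
    DialogueType(Substrate s) { this.substrate = s; }
}

// A frame of type t focused on a topic tau drawn from t's substrate; the
// paper's frame has further elements, which are not modelled here.
class DialogueFrame {
    final DialogueType type;
    final Object topic;   // a belief, contract or plan, matching type.substrate

    DialogueFrame(DialogueType type, Object topic) {
        this.type = type;
        this.topic = topic;
    }
}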

35.3-7, BDI Agents: From Theory to Practice

Notes taken from 'BDI Agents: From Theory to Practice' (1995), by Anand S. Rao and Michael P. Georgeff

3, Decision Trees to Possible Worlds

4, BDI Logics

The above transformation [of Section 3] provides the basis for developing a logical theory for deliberation by agents that is compatible with quantitative decision theory in those cases where we have good estimates for probabilities and payoffs. However, it does not address the case in which we do not have such estimates, nor does it address the dynamic aspects of deliberation, particularly those concerning commitment to previous decisions.

We begin by abstracting the model given above to reduce probabilities and payoffs to dichotomous (0-1) values. That is, we consider propositions to be either believed or not believed, desired or not desired, and intended or not intended, rather than ascribing continuous measures to them. Within such a framework, we first look at the static properties we would want of BDI systems and next their dynamic properties...

Static Constraints: The static relationships among the belief-, desire-, and intention-accessible worlds can be examined along two different dimensions, one with respect to the sets of possible worlds and the other with respect to the structure of the possible worlds...

Dynamic Constraints: As discussed earlier, an important aspect of a BDI architecture is the notion of commitment to previous decisions. A commitment embodies the balance between the reactivity and goal-directedness of an agent-oriented system. In a continuously changing environment, commitment lends a certain sense of stability to the reasoning process of an agent. This results in savings in computational effort and hence better overall performance.

A commitment usually has two parts to it: one is the condition that the agent is committed to maintain, called the commitment condition, and the second is the condition under which the agent gives up the commitment, called the termination condition. As the agent has no direct control over its beliefs and desires, there is no way that it can adopt or effectively realize a commitment strategy over these attitudes. Thus we restrict the commitment condition to intentions...
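
A small Java sketch of the two parts of a commitment (my illustration, not the paper's formalisation), with both conditions modelled as predicates over the agent's current beliefs:

import java.util.Set;
import java.util.function.Predicate;

class Commitment {
    final Predicate<Set<String>> commitmentCondition;   // what the agent is committed to maintain
    final Predicate<Set<String>> terminationCondition;  // when the commitment is given up

    Commitment(Predicate<Set<String>> commit, Predicate<Set<String>> terminate) {
        this.commitmentCondition = commit;
        this.terminationCondition = terminate;
    }

    // The commitment is maintained while its condition holds and the
    // termination condition does not; different commitment strategies differ
    // in how strong the termination condition is allowed to be.
    boolean shouldMaintain(Set<String> beliefs) {
        return commitmentCondition.test(beliefs) && !terminationCondition.test(beliefs);
    }
}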

5, Abstract Architecture

6, Applications

7, Comparison and Conclusion

... While the earlier formalisms present a particular set of semantic constraints or axioms as being the formalization of a BDI agent, we adopt the view that one should be able to choose an appropriate BDI system for an application based on the rational behaviours required for that application. As a result, following the modal logic tradition, we have discussed how one can categorize different combinations of interactions between beliefs, desires, and intentions...

35.1-2, BDI Agents: From Theory to Practice

Notes taken from 'BDI Agents: From Theory to Practice' (1995), by Anand S. Rao and Michael P. Georgeff

1, Introduction

... A number of different approaches have emerged as candidates for the study of agent-oriented systems [] One such architecture views the system as a rational agent having certain mental attitudes of Belief, Desire and Intention (BDI), representing, respectively, the information, motivational, and deliberative states of the agent. These mental attitudes determine the system's behaviour and are critical for achieving adequate or optimal performance when deliberation is subject to resource bounds...

2, The System and its Environment

... First [] it is essential that the system have information on the state of the environment. But as this cannot necessarily be determined in one sensing action [] it is necessary that there be some component of system state that represents this information and which is updated after each sensing action. We call such a component the system's beliefs... Thus, beliefs can be viewed as the informative component of system state.

Second, it is necessary that the system also have information about the objectives to be accomplished or, more generally, what priorities or payoffs are associated with the various current objectives []... We call this component the system's desires, which can be thought of as representing the motivational state of the system.

... We seem caught on the horns of a dilemma: reconsidering the choice of action at each step is potentially too expensive and the chosen action possibly invalid, whereas unconditional commitment to the chosen course of action can result in the system failing to achieve its objectives. However, assuming that potentially significant changes can be determined instantaneously, it is possible to limit the frequency of reconsideration and thus achieve an appropriate balance between too much reconsideration and not enough []. For this to work, it is necessary to include a component of system state to represent the currently chosen course of action; that is, the output of the most recent call to the selection function. We call this additional state component the system's intentions. In essence, the intentions of the system capture the deliberative component of the system.
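
A rough Java sketch of the control loop this implies (the generic BDI interpreter pattern, not the paper's abstract architecture of Section 5); the selection function and the test for significant change are placeholders:

import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashSet;
import java.util.Set;

class BdiAgent {
    final Set<String> beliefs = new HashSet<>();          // informative component
    final Set<String> desires = new HashSet<>();          // motivational component
    final Deque<String> intentions = new ArrayDeque<>();  // deliberative component

    void step(String percept) {
        beliefs.add(percept);                              // update beliefs after each sensing action
        if (significantChange(percept)) {                  // reconsider only on significant changes,
            intentions.clear();                            // balancing reactivity and commitment
            String chosen = select(desires, beliefs);      // output of the selection function
            if (chosen != null) intentions.push(chosen);
        }
        if (!intentions.isEmpty()) execute(intentions.peek());
    }

    boolean significantChange(String percept) { return true; }     // placeholder test
    String select(Set<String> options, Set<String> beliefs) {      // placeholder deliberation
        return options.isEmpty() ? null : options.iterator().next();
    }
    void execute(String intention) { System.out.println("acting on: " + intention); }
}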

Friday, 9 November 2007

Agent's Goals

Taken from 'On the Generation of Bipolar Goals in Argumentation-Based Negotiation' (2005), by Leila Amgoud and Souhila Kaci

Typology of Goals

Recent studies in psychology claim that goals are bipolar and that there are at least two kinds of goals: positive goals, representing what the agent wants to achieve, and negative goals, representing what the agent rejects.

Beware that positive goals do not simply mirror what is not rejected, since a goal which is not rejected is not necessarily pursued. Goals which are neither negative nor positive are said to be in abeyance.

Note however that positive and negative goals are related by a coherence condition which says that what is pursued should be among what is not rejected.
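
A minimal Java sketch of these three categories and the coherence condition (my encoding, not the authors'):

import java.util.HashSet;
import java.util.Set;

class BipolarGoals {
    final Set<String> positive = new HashSet<>();   // what the agent wants to achieve
    final Set<String> negative = new HashSet<>();   // what the agent rejects

    // Coherence condition: what is pursued should be among what is not rejected.
    boolean isCoherent() {
        for (String g : positive) {
            if (negative.contains(g)) return false;
        }
        return true;
    }

    // Goals that are neither positive nor negative are in abeyance.
    boolean inAbeyance(String goal) {
        return !positive.contains(goal) && !negative.contains(goal);
    }
}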

The Origins of Goals

An agent's goals generally come from two different sources:
- from beliefs that justify their existence: the agent believes that the world is in a state that warrants the existence of its goals. These are called initial goals or conditional goals; they are conditional because they depend on the beliefs.
- an agent can adopt a goal because it allows him to achieve an initial goal. These are called sub-goals or adopted goals.

A conditional rule is an expression of the form
R: c1 & ... & cn => g,
which expresses the fact that if c1 ... cn are true then the agent will have the goal g.

A planning rule is an expression of the form
P: g1 & ... & gn |-> g,
which means that the agent believes that if he realizes g1, ..., gn then he will be able to achieve g.
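
A small Java sketch of how the two rule forms could be represented (the names and structure are mine, not the paper's):

import java.util.List;
import java.util.Set;

// R: c1 & ... & cn => g  -- if the conditions are believed to hold, adopt g as a goal.
class ConditionalRule {
    final List<String> conditions;
    final String goal;
    ConditionalRule(List<String> conditions, String goal) {
        this.conditions = conditions;
        this.goal = goal;
    }
    boolean fires(Set<String> beliefs) { return beliefs.containsAll(conditions); }
}

// P: g1 & ... & gn |-> g  -- the agent believes that realising g1..gn achieves g.
class PlanningRule {
    final List<String> subGoals;
    final String goal;
    PlanningRule(List<String> subGoals, String goal) {
        this.subGoals = subGoals;
        this.goal = goal;
    }
}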

Sunday, 4 November 2007

Distinguishing Agents

Quotes taken from 'Agent Technology for e-Commerce' (2007), by Maria Fasli

A paradigm shift (page 5):

"... What distinguishes agents from other pieces of software is that computation is not simply calculation, but delegation and interaction; users do not act upon agents as they do with other software programs, but they delegate tasks to them and interact with them in a conversational rather than in a command mode. Intrinsically, agents enable the transition from simple static algorithmic-based computation to dynamic interactive delegation-based service-oriented computation..."

The novelty in agents (page 8):

"So what is it that makes agents different, over and beyond other software? Whereas traditional software applications need to be told explicitly what it is that they need to accomplish and the exact steps that they have to perform, agents need to be told what the goal is but not how to achieve it. Then, being 'smart', they will actively seek ways to satisfy this goal, acting with the minimum intervention from the user. Agents will figure out what needs to be done to ahieve the delegated goal, but also react to any changes in the environment as they occur, which may affect their plans and goal accomplishment, and then subsequently modify their course of action..."