Saturday 29 December 2007

European Workshop on Multi-Agent Systems

Quick update to say that I participated in the Fifth European Workshop on Multi-Agent Systems (EUMAS 07) earlier this month, which took place in Hammamet, Tunisia.

A great first-time experience, every step of the way: from writing the paper (Bilateral Agent Negotiation with Information-Seeking) and getting it reviewed to presenting and discussing it with those unfamiliar with my work. In addition, it was really beneficial to listen in and get a feel for other ongoing research in the field of Multi-Agent Systems, especially work that, like mine, is still at a preliminary stage.

Tuesday 4 December 2007

38, Agent Technology for e-Commerce

Contents of 'Agent Technology for e-Commerce' (2007), Maria Fasli

1, Introduction

(A paradigm shift; Electronic commerce; Agents and e-commerce)

2, Software Agents

(Characteristics; Agents as intentional systems; Making decisions; Planning; Learning; Architectures)

3, Multi-agent Systems

(Interaction; Agent communication; Ontologies; Cooperative problem-solving)

4, Shopping Agents

5, Middle Agents

6, Recommender Systems

7, Elements of Strategic Interaction

(Economics; Game Theory)

8, Negotiation I

(Protocols; Auctions)

9, Negotiation II

(Bargaining; Coalitions; Social choice problems; Argumentation)

10, Mechanism Design

11, Mobile Agents

12, Trust, Security and Legal Issues

(Trust; Electronic institutions; Reputation systems; Security; Cryptography)

Monday 26 November 2007

Intelligent Design

Source: The Economist, October 20th 2007
Section: Economics Focus
Title: Intelligent Design
Subtitle: A theory of an intelligently guided invisible hand wins the Nobel prize

... despite its dreary name, mechanism design is a hugely important area of economics, and underpins much of what dismal scientists do today. It goes to the heart of one of the biggest challenges in economics: how to arrange our economic interactions so that, when everyone behaves in a self-interested manner, the result is something we all like. The word "mechanism" refers to the institutions and the rules of the game that govern our economic activities...

Mechanism-design theory aims to give the invisible hand a helping hand, in particular by focusing on how to minimise the economic cost of "asymmetric information" - the problem of dealing with someone who knows more than you do...

His [Mr Hurwicz's] big idea was "incentive compatibility". The way to get as close as possible to the most efficient outcome is to design mechanisms in which everybody does best for themselves by sharing truthfully whatever private information they have that is asked for...

37, An implementation of norm-based agent negotiation

Notes taken from 'An implementation of norm-based agent negotiation' (2007), by Peter Dijkstra, Henry Prakken, Kees de Vey Mestdagh

1, Introduction

2, The Problem of Regulated Information Exchange

3, Requirements for the Multi-Agent Architecture

Knowledge: In order to regulate distributed information exchange, agents must have knowledge of the relevant regulations and the local interpretations of those regulations, their goals and the likely consequences of their actions...

Reasoning: ... the agents should be capable of generating and evaluating arguments for and against certain claims and they must be able to revise their beliefs as a result of the dialogues. Finally, in order to generate conditional offers, the agents should be able to do some form of hypothetical reasoning.

Communication: ...

4, Formalisation

Dialogical interaction: Communication language; Communication protocol

5, Agent Architecture

Description of the Components: User communication module; Database communication module; Agent communication language; Execution cycle module; Negotiation policy module; Argumentation system module

Negotiation Policy: ... Our negotiation policies cover two issues: the normative issue of whether accepting an offer is obligatory or forbidden, and the teleological issue whether accepting an offer violates the agent's own interests. Of course these policies can be different for the requesting and the responding agent... In the negotiation policy for a reject, the policy returns a why-reject move which starts an embedded persuasion dialogue. The specification and implementation of embedded persuasion dialogues will be the subject of future research.

Agent execution cycle: The agent execution cycle processes messages and triggers other modules during the selection of the appropriate dialogue moves. First, the speech act, locution and content are parsed from the incoming message, then depending on the locution (offer, accept, withdraw or reject) the next steps are taken... The execution cycle can be represented in Java pseudo-code...
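
As a rough sketch for my own reference (Python rather than the paper's Java pseudo-code; the message format and the respond_to_offer callback are my own assumptions), the dispatch-on-locution idea might look like this:

def execution_cycle(messages, respond_to_offer):
    # Parse each incoming message and dispatch on its locution, as described above.
    replies = []
    for msg in messages:
        locution, content = msg["locution"], msg["content"]
        if locution == "offer":
            replies.append(respond_to_offer(content))  # accept, reject or counteroffer
        elif locution == "reject":
            replies.append({"locution": "why-reject", "content": content})  # opens persuasion
        elif locution in ("accept", "withdraw"):
            break  # dialogue terminates
    return replies

# Example: reject any offer below a private reservation value of 10.
print(execution_cycle(
    [{"locution": "offer", "content": 8}],
    lambda price: {"locution": "reject" if price < 10 else "accept", "content": price}))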

6, Illustration of the Agent Architecture

Knowledge base: Knowledge is represented in the prolog-like syntax of the ASPIC tool...

Dialogue from example 2: ...

7, Conclusion

Wednesday 21 November 2007

36, Towards a multi-agent system for regulated information exchange in crime investigations

Notes taken from 'Towards a multi-agent system for regulated information exchange in crime investigations' (2006), by Pieter Dijkstra, Floris Bex, Henry Prakken, Kees de Vey Mestdagh

1, Introduction

... we define dialogue policies for the individual agents, specifying their behaviour within a negotiation. Essentially, when deciding to accept or reject an offer or to make a counteroffer, an agent first reasons about the law and then about the interests that are at stake: he first determines whether it is obligatory or permitted to perform the actions specified in the offer; if permitted but not obligatory, the agent next determines whether it is in his interests to accept the offer...

2, The problem of regulated information exchange

3, Examples

4, Requirements for the multi-agent architecture

(Knowledge; Reasoning; Goals; Communication)

5, Outline of a computational architecture

Dialogical Interaction: communication language; communication protocol

The Agents: representation of knowledge and goals; reasoning engine; dialogue policies

6, Illustration of the proposed architecture

7, Conclusion

Monday 12 November 2007

Modelling Dialogue Types

Taken from 'Dialogue Frames in Agent Communication' (1998), by Chris Reed

Clearly the various types of dialogue are not concerned with identical substrate: persuasion, inquiry and information-seeking are epistemic, negotiation is concerned with what might generally be called 'contracts', and deliberation with 'plans'. The model presented [] does not aim to restrict either the agent architecture or the underlying communication protocol to any particular formalism...

Thus the foundation of the model is a set of agents, A, each of whom have a set of beliefs, B, contracts, C, and plans, P...

... it is possible to define the set of dialogue types, where each type is a name-substrate pair,
D = {(persuade,B), (negotiate,C), (inquire,B), (deliberate,P), (infoseek,B)}
From this matrix, a dialogue frame is defined as a tuple with four elements...

A dialogue frame is thus of a particular type, t, and focused on a particular topic, tau: a persuasion dialogue will be focused on a particular belief, a negotiation on a contract, a deliberation on a plan, and so on. A dialogue frame is initiated by a propose-accept sequence, and terminates with a characteristic utterance indicating acceptance or concession to the topic on the part of one of the agents...
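
To fix the idea in my head, here is a tiny sketch (Python; the paper only says a frame is a four-element tuple, so the particular components and names below are my guesses, not Reed's definition):

from collections import namedtuple

# Dialogue types as name-substrate pairs, as in the set D above.
DIALOGUE_TYPES = {"persuade": "belief", "negotiate": "contract", "inquire": "belief",
                  "deliberate": "plan", "infoseek": "belief"}

# A guessed four-element frame: type, topic, participants, and the moves made so far.
DialogueFrame = namedtuple("DialogueFrame", ["dtype", "topic", "participants", "moves"])

frame = DialogueFrame("negotiate", "price_of_car", ("agent_a", "agent_b"), [])
assert DIALOGUE_TYPES[frame.dtype] == "contract"  # a negotiation is focused on a contract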

35.3-7, BDI Agents: From Theory to Practice

Notes taken from 'BDI Agents: From Theory to Practice' (1995), by Anand S. Rao and Michael P. Georgeff

3, Decision Trees to Possible Worlds

4, BDI Logics

The above transformation [of Section 3] provides the basis for developing a logical theory for deliberation by agents that is compatible with quantitative decision theory in those cases where we have good estimates for probabilities and payoffs. However, it does not address the case in which we do not have such estimates, nor does it address the dynamic aspects of deliberation, particularly those concerning commitment to previous decisions.

We begin by abstracting the model given above to reduce probabilities and payoffs to dichotomous (0-1) values. That is, we consider propositions to be either believed or not believed, desired or not desired, and intended or not intended, rather than ascribing continuous measures to them. Within such a framework, we first look at the static properties we would want of BDI systems and next their dynamic properties...

Static Constraints: The static relationships among the belief-, desire-, and intention-accessible worlds can be examined along two different dimensions, one with respect to the sets of possible worlds and the other with respect to the structure of the possible worlds...

Dynamic Constraints: As discussed earlier, an important aspect of a BDI architecture is the notion of commitment to previous decisions. A commitment embodies the balance between the reactivity and goal-directedness of an agent-oriented system. In a continuously changing environment, commitment lends a certain sense of stability to the reasoning process of an agent. This results in savings in computational effort and hence better overall performance.

A commitment usually has two parts to it: one is the condition that the agent is committed to maintain, called the commitment condition, and the second is the condition under which the agent gives up the commitment, called the termination condition. As the agent has no direct control over its beliefs and desires, there is no way that it can adopt or effectively realize a commitment strategy over these attitudes. Thus we restrict the commitment condition to intentions...

5, Abstract Architecture

6, Applications

7, Comparison and Conclusion

... While the earlier formalisms present a particular set of semantic constraints or axioms as being the formalization of a BDI agent, we adopt the view that one should be able to choose an appropriate BDI system for an application based on the rational behaviours required for that application. As a result, following the modal logic tradition, we have discussed how one can categorize different combinations of interactions between beliefs, desires, and intentions...

35.1-2, BDI Agents: From Theory to Practice

Notes taken from 'BDI Agents: From Theory to Practice' (1995), by Anand S. Rao and Michael P. Georgeff

1, Introduction

... A number of different approaches have emerged as candidates for the study of agent-oriented systems [] One such architecture views the system as a rational agent having certain mental attitudes of Belief, Desire and Intention (BDI), representing, respectively, the information, motivational, and deliberative states of the agent. These mental attitudes determine the system's behaviour and are critical for achieving adequate or optimal performance when deliberation is subject to resource bounds...

2, The System and its Environment

... First [] it is essential that the system have information on the state of the environment. But as this cannot necessarily be determined in one sensing action [] it is necessary that there be some component of system state that represents this information and which is updated after each sensing action. We call such a component the system's beliefs... Thus, beliefs can be viewed as the informative component of system state.

Second, it is necessary that the system also have information about the objectives to be accomplished or, more generally, what priorities or payoffs are associated with the various current objectives []... We call this component the system's desires, which can be thought of as representing the motivational state of the system.

... We seem caught on the horns of a dilemma: reconsidering the choice of action at each step is potentially too expensive and the chosen action possibly invalid, whereas unconditional commitment to the chosen course of action can result in the system failing to achieve its objectives. However, assuming that potentially significant changes can be determined instantaneously, it is possible to limit the frequency of reconsideration and thus achieve an appropriate balance between too much reconsideration and not enough []. For this to work, it is necessary to include a component of system state to represent the currently chosen course of action; that is, the output of the most recent call to the selection function. We call this additional state component the system's intentions. In essence, the intentions of the system capture the deliberative component of the system.
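
The sense-deliberate-act loop implied by these three state components could be sketched roughly like this (my own Python skeleton, not Rao and Georgeff's abstract interpreter; all names are made up):

def bdi_loop(sense, options, select, execute, significant_change, max_steps=100):
    beliefs, intentions = {}, None
    for _ in range(max_steps):
        beliefs.update(sense())                    # informative component
        if intentions is None or significant_change(beliefs, intentions):
            desires = options(beliefs)             # motivational component
            intentions = select(beliefs, desires)  # deliberative component
        execute(intentions)

# Trivial usage: no sensing, one desire, always keep the current intention.
bdi_loop(sense=lambda: {}, options=lambda b: ["recharge"], select=lambda b, d: d[0],
         execute=print, significant_change=lambda b, i: False, max_steps=3)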

Friday 9 November 2007

Agent's Goals

Taken from 'On the Generation of Bipolar Goals in Argumentation-Based Negotiation' (2005), by Leila Amgoud and Souhila Kaci

Typology of Goals

Recent studies in psychology claim that goals are bipolar and that there are at least two kinds of goals: positive goals, representing what the agent wants to achieve, and negative goals, representing what the agent rejects.

Beware, though, that positive goals do not simply mirror what is not rejected, since a goal that is not rejected is not necessarily pursued. Goals that are neither negative nor positive are said to be in abeyance.

Note however that positive and negative goals are related by a coherence condition which says that what is pursued should be among what is not rejected.

The Origins of Goals

An agent's goals generally come from two different sources:
- from beliefs that justify their existence. So, the agent believes that the world is in a state that warrants the existence of its goals. These goals are called the initial ones or also conditional goals. They are conditional because they depend on the beliefs.
- an agent can adopt a goal because it allows him to achieve an initial goal. These are called sub-goals or adopted goals.

A conditional rule is an expression of the form
R: c1 & ... & cn => g,
which expresses the fact that if c1 ... cn are true then the agent will have the goal g.

A planning rule is an expression of the form
P: g1 & ... & gn |-> g,
which means that the agent believes that if he realizes g1, ..., gn then he will be able to achieve g.
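
A small sketch of how the two rule forms might be applied (Python; the rule contents and representation are my own illustration, not from the paper):

# A conditional rule fires when all its conditions are believed; a planning rule
# says the goal in its head can be achieved by achieving the subgoals in its body.
conditional_rules = [({"rainy", "has_meeting"}, "take_umbrella")]
planning_rules = [({"find_umbrella", "pick_up_umbrella"}, "take_umbrella")]

def generate_goals(beliefs):
    return {goal for conditions, goal in conditional_rules if conditions <= beliefs}

def subgoals_for(goal):
    return [sorted(body) for body, head in planning_rules if head == goal]

print(generate_goals({"rainy", "has_meeting"}))  # {'take_umbrella'}
print(subgoals_for("take_umbrella"))             # [['find_umbrella', 'pick_up_umbrella']]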

Sunday 4 November 2007

Distinguishing Agents

Quotes taken from 'Agent Technology for e-Commerce' (2007), Maria Fasli

A paradigm shift (page 5):

"... What distinguishes agents from other pieces of software is that computation is not simply calculation, but delegation and interaction; users do not act upon agents as they do with other software programs, but they delegate tasks to them and interact with them in a conversational rather than in a command mode. Intrinsically, agents enable the transition from simple static algorithmic-based computation to dynamic interactive delegation-based service-oriented computation..."

The novelty in agents (page 8):

"So what is it that makes agents different, over and beyond other software? Whereas traditional software applications need to be told explicitly what it is that they need to accomplish and the exact steps that they have to perform, agents need to be told what the goal is but not how to achieve it. Then, being 'smart', they will actively seek ways to satisfy this goal, acting with the minimum intervention from the user. Agents will figure out what needs to be done to achieve the delegated goal, but also react to any changes in the environment as they occur, which may affect their plans and goal accomplishment, and then subsequently modify their course of action..."

Tuesday 30 October 2007

Requirements on Commitment in Dialogue

Taken from 'Fundamentals of Critical Argumentation' (2006), by Douglas Walton

Three General Requirements on Commitment in Dialogue

1, If a proponent is committed to a set of statements, and the respondent can show that another statement follows logically as a conclusion from that set, then the respondent is committed to that conclusion.

2, The respondent has the right to retract commitment to that conclusion, but she must also retract commitment to at least one of the premises. For otherwise it has been shown that she has inconsistent commitments.

3, If one party in a dialogue can show that the other party has inconsistent commitments, then the second party must retract at least one of those commitments.

Inconsistency is generally a bad thing in logic. If a set of statements is inconsistent, they cannot all be true. At least one must be false...

34, A Verifiable Protocol for Arguing about Rejections in Negotiation

Notes taken from 'A Verifiable Protocol for Arguing about Rejections in Negotiation' (2005), by Jelle van Veenen and Henry Prakken

1, Introduction

2, Negotiation and Argumentation

Speech acts and replies in negotiation with embedded persuasion:

Negotiation

Act: request(a)
Attacks: offer(a'), withdraw
Surrenders:

Act: offer(a)
Attacks: offer(a') (a /= a'), reject(a), withdraw
Surrenders: accept(a)

Act: reject(a)
Attacks: offer(a') (a /= a'), why-reject(a), withdraw
Surrenders:

Act: accept(a)
Attacks:
Surrenders:

Act: why-reject(a)
Attacks: claim(¬a), withdraw
Surrenders:

Act: withdraw
Attacks:
Surrenders:

Persuasion

Act: claim(a)
Attacks: why(a)
Surrenders: concede(a)

Act: why(a)
Attacks: argue(A) (conc(A) = a)
Surrenders: retract(a)

Act: argue(A)
Attacks: why(a) (a is in prem(A)), argue(B) (B defeats A)
Surrenders: concede(a) (a is in prem(A) or a = conc(A))

Act: concede(a)
Attacks:
Surrenders:

Act: retract(a)
Attacks:
Surrenders:

The speech acts above show the combination of languages for negotiation and persuasion. The negotiation is extended with the why-reject locution, which allows a negotiation to shift into a persuasion subdialogue.
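
One way to capture the reply structure above for a simple protocol check (my own Python encoding; only the acts with non-empty replies are listed, and the argument contents a, a', A, B are ignored):

REPLIES = {
    "request":    {"attacks": {"offer", "withdraw"},               "surrenders": set()},
    "offer":      {"attacks": {"offer", "reject", "withdraw"},     "surrenders": {"accept"}},
    "reject":     {"attacks": {"offer", "why-reject", "withdraw"}, "surrenders": set()},
    "why-reject": {"attacks": {"claim", "withdraw"},               "surrenders": set()},
    "claim":      {"attacks": {"why"},                             "surrenders": {"concede"}},
    "why":        {"attacks": {"argue"},                           "surrenders": {"retract"}},
    "argue":      {"attacks": {"why", "argue"},                    "surrenders": {"concede"}},
}

def legal_reply(previous_act, reply_act):
    entry = REPLIES.get(previous_act, {"attacks": set(), "surrenders": set()})
    return reply_act in entry["attacks"] | entry["surrenders"]

assert legal_reply("reject", "why-reject")  # the shift into a persuasion subdialogue
assert not legal_reply("accept", "why")     # accept closes the negotiation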

3, An Example

4, Conclusion

Illocutions for Persuasive Negotiation (2)

The dialogue primitives (performatives) described in 'Logic agents, dialogues and negotiation: an abductive approach' (2001) are of the form tell(a,b,Move,t) where a and b are the sending and the receiving agents, respectively, t represents the time when the primitive is uttered, and Move is a dialogue move, recursively defined as follows:

- request(give(R)) is a dialogue move, used to request a resource R;

- promise(give(R),give(R')) is a dialogue move, used to propose and to commit to exchange deals, of resource R' in exchange for resource R;

- if Move is a dialogue move, so are
--- accept(Move), refuse(Move) (used to accept/refuse a previous dialogue Move)
--- challenge(Move) (used to ask a justification for a previous Move)
--- justify(Move) (used to justify a past Move, by means of a Support)

There are no other dialogue moves, except the ones given above.
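
The recursive structure of these moves is easy to mirror with a few small types (Python sketch; the class names and the tuple form of tell are my own, not the paper's notation):

from dataclasses import dataclass

@dataclass
class Request:        # request(give(R))
    resource: str

@dataclass
class Promise:        # promise(give(R), give(R'))
    wanted: str
    offered: str

@dataclass
class Wrapped:        # accept/refuse/challenge/justify applied to a previous move
    kind: str
    move: object

def tell(sender, receiver, move, t):
    return (sender, receiver, move, t)

# b challenges a's earlier promise to exchange a hammer for a nail, at time 3.
msg = tell("b", "a", Wrapped("challenge", Promise(wanted="nail", offered="hammer")), t=3)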

Monday 29 October 2007

Illocutions for Persuasive Negotiation (1)

In the paper 'A Framework for Argumentation-Based Negotiation' (1997) the authors (Carles Sierra et al) discuss three types of illocutions that serve a persuasive function in negotiation:
(i) threats — failure to accept this proposal means something negative will happen to the agent;
(ii) rewards — acceptance of this proposal means something positive will happen to the agent; and
(iii) appeals — the agent should prefer this option over that alternative for this reason.

The illocutionary acts can be divided into two sets, corresponding to negotiation particles (those used to make offers and counter offers) (offer, request, accept, reject) and corresponding to persuasive particles (those used in argumentation) (appeal, threaten, reward).

The negotiation dialogue between two agents consists of a sequence of offers and counter offers containing values for the issues. These offers and counteroffers can be just conjunctions of ‘issue = value’ pairs (offer) or can be accompanied by persuasive arguments (threaten, reward, appeal). ‘Persuasion’ is a general term covering the different illocutionary acts by which agents try to change other agents’ beliefs and goals.

appeal is a particle with a broad meaning, since there are many different types of appeal. For example, an agent can appeal to authority, to prevailing practice or to self-interest. The structure of the illocutionary act is
appeal(x,y,f,[not]a,t),
where a is the argument that agent x communicates to y in support of a formula f.

threaten and reward are simpler because they have a narrower range of interpretations. Their structure,
threaten(x,y,[not]f1,[not]f2,t)
reward(x,y,[not]f1,[not]f2,t)
is recursive since formulae f1 and f2 again may be illocutions. This recursive definition allows for a rich set of possible (illocutionary) actions supporting the persuasion.

Agents can use the illocutions according to the following negotiation protocol:
1. A negotiation always starts with a deal proposal, i.e. an offer or request. In illocutions the special constant ‘?’ may appear. This is thought of as a petition to an agent to make a detailed proposal by filling the ‘?’s with defined values.
2. This is followed by an exchange of possibly many counter proposals (that agents may reject) and many persuasive illocutions.
3. Finally, a closing illocution is uttered, i.e. an accept or withdraw.
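
For my own notes, the illocutions above can be written down as plain nested tuples (Python; the field ordering follows the structures quoted above, but the example content is invented):

def appeal(x, y, formula, argument, t):
    return ("appeal", x, y, formula, argument, t)

def threaten(x, y, f1, f2, t):
    return ("threaten", x, y, f1, f2, t)

def reward(x, y, f1, f2, t):
    return ("reward", x, y, f1, f2, t)

# Because f1 and f2 may themselves be illocutions, threats and rewards can nest:
nested = threaten("x", "y",
                  "not accept(offer_1)",
                  reward("x", "y", "accept(offer_2)", "discount", t=2),
                  t=1)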

-----

In the paper 'Arguments, Dialogue, and Negotiation' (2000) the authors (Leila Amgoud et al) present a number of moves, describe how the moves update the Commitment Stores (the update rules), give the legal next steps possible by the other agent after a particular move (the dialogue rules), and detail the way that each move integrates with the agent’s use of argumentation (the rationality rules). The moves are classified as follows:

(i) Basic Dialogue Moves (assert(p), assert(S), question(p), challenge(p));
(ii) Negotiation Moves (request(p), promise(p => q));
(iii) Responding Moves (accept(p), accept(S), accept(p => q), refuse(p), refuse(p => q)).

The authors argue that this set of moves is sufficient to capture the communication language of the above-discussed paper.

Tuesday 16 October 2007

33, Getting To Yes

Contents of 'Getting to Yes: Negotiating Agreement Without Giving In' (1992), Roger Fisher and William Ury

I - The Problem
1, Don't Bargain Over Positions

II - The Method
2, Separate the PEOPLE from the Problem
3, Focus on INTERESTS, Not Positions
4, Invent OPTIONS for Mutual Gain
5, Insist on Using Objective CRITERIA

III - Yes, But...
6, What If They Won't Play? ... 6, What If They Are More Powerful? (Develop Your BATNA - Best Alternative To a Negotiated Agreement)
7, What If They Won't Play? (Use Negotiation Jujitsu)
8, What If They Use Dirty Tricks? (Taming the Hard Bargainer)

IV - In Conclusion

V - Ten Questions People Ask About Getting To Yes

Saturday 13 October 2007

Breaking through the Kyoto impasse

Below is a selection of passages from an article found in the September 29th 2007 issue of 'The Economist', followed by some thoughts.

Section: Economics focus
Title: Playing games with the planet
Subtitle: A version of the "prisoner's dilemma" may suggest ways to break through the Kyoto impasse

"... all countries will enjoy the benefits of a stable climate whether they have helped to bring it about or not. So a government that can persuade others to cut their greenhouse-gas emissions without doing so itself gets the best of both worlds: it avoids all the expense and self-denial involved, and yet still escapes catastrophe...

The problem, of course, is that if everyone is counting on others to act, no one will, and the consequences could be much worse than if everyone had simply done their bit to begin with. Game theorists call a simplified version of this scenario the 'prisoner's dilemma'...

Pessimistic souls assume that the international response to climate change will go the way of the prisoner's dilemma. Rational leaders will always neglect the problem, on the grounds that others will either solve it, allowing their country to become a free-rider, or let it fester, making it a doomed cause anyway. So the world is condemned to a slow roasting, even though global warming could be averted if everyone co-operated.

Yet in a recent paper, Michael Liebreich, of New Energy Finance, a research firm, draws on game theory to reach the opposite conclusion. The dynamics of the prisoner's dilemma, he points out, change dramatically if participants know that they will be playing the game more than once. In that case, they have an incentive to co-operate, in order to avoid being punished for their misconduct by their opponent in subsequent rounds.

The paper cites a study on the subject by an American academic, Robert Axelrod, which argues that the most successful strategy when the game is repeated has three elements: first, players should start out by co-operating; second, they should deter betrayals by punishing the transgressor in the next round; and third, they should not bear grudges but instead should start co-operating with treacherous players again after meting out the appropriate punishment. The result of this strategy can be sustained co-operation rather than a cycle of recrimination.

Mr Liebreich believes that all this holds lessons for the world's climate negotiators. Treaties on climate change, after all, are not one-offs. Indeed, the United Nations is even now trying to get its members to negotiate a successor to its existing treaty, the Kyoto Protocol, which expires in 2012. Many fear that the effort will collapse unless the laggards can be persuaded to join in. But the paper argues that rational countries will not be deterred by free-riders. They will continue to curb their emissions, while devising sanctions for those who do not.

Under lock and Kyoto
The Kyoto Protocol already embodies some of these elements. Countries that do not meet their commitments, for example, are supposed to be punished with a requirement to cut their emissions more sharply the next time around. But Mr Liebreich argues that there should also be sanctions for rich countries that refuse to participate, and stronger incentives for poor countries (which are exempted from any mandatory cuts) to join in...

The global regime on climate change, Mr Liebreich believes, should also be revised more frequently, to allow the game to play itself out more quickly. So instead of stipulating big reductions in emissions, to be implemented over five years, as in Kyoto, negotiators might consider adopting annual targets. That way, co-operative governments know that they cannot be taken advantage of for long, whereas free-riders can be punished and penitents brought back into the fold more quickly.

There are flaws in the analogy, of course. In the real world, governments can communicate and form alliances, which makes the dynamics of the game much more complicated. And governments may not act consistently or rationally... most countries' willingness to act is presumably linked to the severity of global warming's ill effects. If things get bad enough, then with any luck everyone will play the game."

Why is it important to me and my research? It sure makes for an interesting agent problem: a scenario of cyclic dependency, where each agent requires something from another, and where there is a long-term benefit to mutual collaboration but collaborating requires a short-term loss and carries the risk of deceit.
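
To make the repeated-game point concrete, here is a toy simulation (Python; the payoff numbers are the standard textbook values, not taken from the article) of Axelrod-style tit-for-tat against an unconditional defector:

# Payoffs: both cooperate -> 3 each; both defect -> 1 each; lone defector -> 5 vs 0.
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5), ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(my_history, their_history):
    # Start by cooperating, punish a betrayal once, then forgive.
    return "C" if not their_history else their_history[-1]

def always_defect(my_history, their_history):
    return "D"

def play(strategy_a, strategy_b, rounds=10):
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        a, b = strategy_a(hist_a, hist_b), strategy_b(hist_b, hist_a)
        pay_a, pay_b = PAYOFF[(a, b)]
        hist_a.append(a); hist_b.append(b)
        score_a += pay_a; score_b += pay_b
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))    # (30, 30): sustained cooperation
print(play(tit_for_tat, always_defect))  # (9, 14): the free-rider gains little over ten rounds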

Saturday 22 September 2007

32, Reaching Agreements Through Argumentation

Notes taken from 'Reaching agreements through argumentation: a logical model and implementation' (1998), Sarit Kraus, Katia Sycara, Amir Evenchik

1, Introduction

2, The Mental Model

Classification of intentions:
- "Intend-to-do", refers to actions within the direct control of the agent.
- "Intend-that", refers to propositions not directly within the agent's realm of control, that the agent must rely on other agents for satisfying.

(The Formal Model, Syntax, Semantics)

Agent Types: Bounded Agent, An Omniscient Agent, A Knowledgeable Agent, An Unforgetful Agent, A Memoryless Agent, A Non-observer, Cooperative Agents

3, Axioms for Argumentation and for Argument Evaluation

The argument types we present (in order of decreasing strength) are:
(1) Threats to produce goal adoption or goal abandonment on the part of the persuadee.
(2) Enticing the persuadee with a promise of a future reward.
(3) Appeal to past reward.
(4) Appeal to precedents as counterexamples to convey to the persuadee a contradiction between what she/he says and past actions.
(5) Appealing to "prevailing practice" to convey to the persuadee that the proposed action will further his/her goals since it has furthered others' goals in the past.
(6) Appeal to self-interest to convince a persuadee that taking this action will enable achievement of a high-importance goal.

"... Agents with different spheres of expertise may need to negotiate with each other for the sake of requesting each others' services. Their expertise is also their bargaining power..."

(Arguments Involving Threats, Evaluation of Threats, Promise of a Future Reward, Appeal to Past Promise, Appeal to "Prevailing Practice", Appeal to Self Interest, Selecting Arguments by an Agent's Type, An Example: Labor Union vs. Management Negotiation, Contract Net Example)

4, Automated Negotiation Agent (ANA)

The general structure of an agent consists of the following main parts:
- Mental state (beliefs, desires, goals, intentions)
- Characteristics (agent type, capabilities, belief verification capabilities)
- Inference rules (mental state update, argument generation, argument selection, request evaluation)

(The Structure of an Agent and its Life Cycle, Inference Rules for Mental State Changes, Argument Production and Evaluation, Argument Selection Rules, Request Evaluation Rules, The Blocks World Environment, Simulation of a Blocks World Scenario)

5, Related Work

(Mental State, Agent Oriented Languages, Multi-agent Planning, Automated Negotiation, Defeasible Reasoning and Computational Dialectics, Game Theory's Models of Negotiation, Social Psychology)

6, Conclusions

Sunday 9 September 2007

31, Customer Coalitions in Electronic Markets

Notes taken from 'Customer Coalitions in Electronic Markets' (2001), by Maksim Tsvetovat, Katia Sycara, Yian Chen, and James Ying

"... In this paper, we report on coalition formation as a means to formation of groups of customers coming together to procure goods at a volume discount ("buying clubs") and economic incentives for creation of such group..."

1, Introduction

A coalition is a set of self-interested agents that agree to cooperate to execute a task or achieve a goal.

2, Prior Work

3, Incentives for Customer Coalitions

(Supplier Incentive to Sell Wholesale; Customer Incentive to Buy Wholesale)

4, Coalitions and Wholesale Purchasing

In the real world, a single customer rarely wants to buy large enough quantities of goods to justify wholesale purchasing... In order to lower the purchase price (and, therefore, increase utility), self-interested customer agents can join in a coalition... This would enable the coalition to buy a wholesale lot from the supplier, break it into sub-lots and distribute them to its members, thus raising the utility of each individual member.

However, the formation and administration of coalitions, as well as the distribution of sub-lots, has its costs... In particular, such costs include the cost of administering coalition membership, the cost of collecting payments from individual members, and the cost of distributing items to the members when the transaction is complete. In some cases (such as distributing copies of software) some of the costs can be very small, and in other cases they may rise to be prohibitively large.
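
A toy way to see the trade-off (Python; every number here is invented purely for illustration): the coalition pays off when the per-member saving from the wholesale price exceeds the per-member share of coordination costs.

def coalition_worthwhile(members, retail_price, wholesale_price,
                         admin_cost, distribution_cost_per_member):
    saving_per_member = retail_price - wholesale_price
    cost_per_member = admin_cost / members + distribution_cost_per_member
    return saving_per_member > cost_per_member

print(coalition_worthwhile(20, retail_price=100, wholesale_price=80,
                           admin_cost=150, distribution_cost_per_member=5))  # True
print(coalition_worthwhile(3, retail_price=100, wholesale_price=80,
                           admin_cost=150, distribution_cost_per_member=5))  # False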

5, Coalition Models

It is possible to construct a number of coalition models and protocols, all of which would have different properties and requirements. In general, all coalition models include several stages:
- Negotiation
- Coalition Formation
- Leader Election/Voting
- Payment Collection
- Execution/Distribution stage

In design of coalition protocols, the following issues have to be taken into account:
- Coalition Stability
- Distribution of Gain
- Distribution of Costs and Utility
- Distribution of Risk
- Trust and Certification

Most coalition protocols can be divided into two classes (pre-negotiation and post-negotiation), based on the order in which negotiation and coalition forming happen...

(Post-negotiation; Pre-negotiation; Distribution of Costs and Utility)

6, Customer Coalitions for Volume Discounts - An Implementation

(Coalition Protocol; Testbed Architecture; Implementation of Agents; PiPL Language)

7, Conclusions and Future Work

30, Bid Together, Buy Together

Notes taken from 'Bid Together, Buy Together: On the Efficacy of Group-Buying Business Models in Internet-based Selling' (2001), by Robert J. Kauffman and Bin Wang

"... Dynamic pricing approaches are used by many well known Internet-based firms, including firms that offer online auctions such as eBay and Amazon.com. A group-buying discount is a dynamic pricing mechanism that mimics the general approach of traditional "discount shopping clubs". Group buying pricing mechanisms permit buyers to aggregate their purchasing power and obtain lower prices than they otherwise would be able to get individually..."

1, Introduction

Model type: Group-buying models
Key concept: Enable buyers to obtain lower prices, as more people indicate a willingness to buy from the Internet-based seller's Web site. There are two varieties, involving group-buying with a fixed time period to completion of an auction, and group-buying with a fixed price that is achieved only when enough buyers participate.

... In a discussion on the recent closure of Mercata, Cook (2001) points out that the group-buying business model is too difficult for the general consumer to understand. The author also argues that the mechanics of group-buying on the Internet prevent impulse buying, due to the lengthy periods consumers have to wait until the end of the auction cycles that characterize group-buying market mechanisms. More importantly, he argues that the transaction volume on group-buying sites is much smaller than that of traditional discount stores, which makes it difficult for group-buying sites to compete with retail giants such as Wal-Mart and Target...

2, The Basics of Group-Buying Models in E-Commerce

2.1, The Market Mechanisms

Pricing to match buyers and sellers is an important function of a market. In the bricks-and-mortar world, posted pricing mechanisms have been the dominant pricing strategies, where retailers display the prices they ask for the merchandise and consumers decide whether they would accept the prices or not. Under dynamic pricing mechanisms, however, buyers are no longer left with this take-it-or-leave-it decision. They can actively negotiate with the sellers to reach a satisfactory price...

2.2, The Value Proposition for Group-Buying on the Internet

The primary value proposition of group-buying business models to consumers is the lower prices they can provide, due to the buyers' collective bargaining power. By accumulating a large number of orders in a short period of time, group-buying Web sites claim they can negotiate low prices with manufacturers and suppliers, and then pass these savings on to their customers...

3, Buyer Behaviour and Market Competition in Group-Buying

(Buyer Behaviour Under the Group-Buying Market Microstructure; The Anticipation of a Price Drop; The Group-Buying Mentality; The Price Threshold Effect; Competition in the Group-Buying Market)

... The "Save-a-Spot" feature at Mobshop allowed shoppers to place conditional bids at lower prices if they were dissatisfied with the current price. This created a win-win situation for both the firm and its customers... Thus, customers did not incur the risk of buying at prices higher than they were willing to pay... And so we see that, under the group-buying market microstructure, consumers have the opportunity to collaborate with each other to get lower prices, instead of simply bidding against each other in auctions. Hence, consumers have the incentive to recruit other shoppers...

4, A Framework for Comparing Group-Buying Websites

Based on our survey of the group-buying websites, we identified two primary kinds of customer targeting strategies: business-to-customer (B2C) and business-to-business (B2B). The B2B category also includes education and government procurement. We also distinguish whether a buyer or a seller initiates the auction cycle... Finally, we note that the demand aggregation approach can be determined by whether the Web site is a destination site or a site with a distributed-service model (e.g. embedded in other Web sites)...

(Mercata.com; Mobshop.com; LetsBuyIt.com; Other Group-Buying Websites)

5, Analyzing Group-Buying Business Models

Some Dimensions for Comparing Group-Buying Business Models:
- Industry Studies
- How Do Prices on Group-Buying Web Sites Compare With Other Firms' Prices?

Comparing Rivals: Mercata and Mobshop
- Pricing Strategy
- Information Endowment
- Site Features
- Product Emphasis
- Pre-Trade and Post-Trade Logistics

Discussion
(1) In which market does the group-buying model work best, B2C or B2B?
(2) How should firms that are focusing on the B2C market compete with other business models for limited customer resources?
(3) What do we learn about the composition of effective product offerings at group-buying Web sites?
(4) To what extent are group-buying Web sites at a disadvantage when it comes to the use of shopbots for comparison shopping?

6, Conclusion

Friday 7 September 2007

Group Buying

Taken from 'Performance of software agents in non-transferable payoff group buying' (2006), by Frederick Asselin and Brahim Chaib-Draa

... Group buying is a natural application domain for research on coalition formation in a multi-agent system (MAS). Consumers have an incentive to regroup with the unit price reduction as a function of the number of units bought by the group. However, as more and more consumers become members of the same group, there is an increase in the number of compromises that each consumer must make in order to agree on the product bought by the group...

Tuesday 4 September 2007

Knowledge Representation and Reasoning

"Knowledge representation and reasoning is the area of Artificial Intelligence (AI) concerned with how knowledge can be represented symbolically and manipulated in an automated way by reasoning programs. More informally, it is the part of AI that is concerned with thinking, and how thinking contributes to intelligent behaviour... in the field of knowledge representation and reasoning we focus on the knowledge, not on the knower..."

Source: 'Knowledge Representation and Reasoning' (2004), Ronald Brachman and Hector Levesque

Monday 3 September 2007

29, Protocol Conformance for Logic-based Agents

'Protocol Conformance for Logic-based Agents' (2003), by Ulrich Endriss, Nicolas Maudet, Fariba Sadri and Francesca Toni

... In non-cooperative interactions (such as negotiation dialogues) occurring in open societies it is crucial that agents are equipped with proper means to check, and possibly enforce, conformance to protocols. We identify different levels of conformance (weak, exhaustive, and robust conformance)...

1, Introduction

2, Representing Protocols

(Legality, Expected inputs, Correct responses)

3, Levels of Conformance

(Weak conformance, Exhaustive conformance, Robust conformance)

4, Logic-based Agents

(Checking conformance (Response space), Enforcing conformance, Examples)

5, Conclusion

Saturday 1 September 2007

Supply Chain Library

Taken from 'Modelling Supply Chain Dynamics: A Multiagent Approach' (1998), Jayashankar M. Swaminathan, Stephen F. Smith and Norman M. Sadeh

"We classify different elements in the supply chain library into two broad categories - Structural Elements and Control Elements. Structural elements (modeled as agents) are involved in actual production and transportation of products, and control elements help in coordinating the flow of products in an efficient manner with the use of messages. Structural elements correspond to agents and control elements correspond to the control policies in our framework. Structural and Control elements are further classified as follows:

Structural Elements
- Production (Retailer, Distributor, Manufacturer, Supplier)
- Transportation (Vehicles)

Control Elements
- Flow (Loading, Routing)
- Inventory (Centralized, Decentralized)
- Demand (Forecast, Marketing)
- Supply (Contracts)
- Information (Real-time, Periodic)

As well as the above, there is also the Customer.

Wednesday 29 August 2007

Example Dialogue (Safe Car)

Taken from 'Dialogue Games for Ontological Commitment' (2003), Robbert-Jan Beun and M. van Eijk

We give an example of a dialogue (somewhat adorned in natural language) that is generated by the rules [presented in the paper]:

A1: Is this a safe car?

A2's ontology defines the non-basic concept 'safety' in terms of 'having airbags' and 'having a good crash test'. According to this interpretation the car is indeed believed to be safe, but since A2 does not know whether the meaning is shared, it responds... :

A2: Yes, it has air bags and a good crash test.

This response is pushed on the stack of A1. Agent A1, however, has a different view on 'safety of cars', and it manifests this discrepancy by responding ... :

A1: To my opinion, a safe car would also have traction control.

Agent A2 now knows A1's interpretation of 'safety' ... and since it believes that this particular car does not have traction control it gives the following answer to the initial question ... :

A2: Mhm, if safety also amounts to having traction control then this car is not safe.

This response is pushed on the stack of A1. Agent A1 has received an acceptable answer to its question and ends the dialogue ... :

A1: OK, thank you.

Note that if, in the second turn, A2 had not manifested its interpretation of 'safety', the ontological discrepancy would have remained unnoticed, possibly leading A1 to draw incorrect conclusions from the answer.

Thursday 16 August 2007

Updates versus Revisions

Taken from 'Belief Revision' (1992) (page 183), Edited by Peter Gardenfors

... we make a fundamental distinction between two kinds of modifications to a knowledge base. The first one, update, consists of bringing the knowledge base up to date when the world described by it changes...

The second type of modification, revision, is used when we are obtaining new information about a static world...

Justifications versus Coherence Models

Taken from 'Belief Revision' (1992) (page 8), Edited by Peter Gardenfors

A question that has to be answered when modelling a state of belief is whether the justifications for the beliefs should be part of the model or not. With respect to this question there are two main approaches. One is the foundations theory which holds that one should keep track of the justifications for one's beliefs: Propositions that have no justification should not be accepted as beliefs. The other is the coherence theory which holds that one need not consider the pedigree of one's beliefs. The focus is instead on the logical structure of the beliefs - what matters is how a belief coheres with the other beliefs that are accepted in the present state.

It should be obvious that the foundations and the coherence theories have very different implications for what should count as rational changes of belief systems. According to the foundations theory, belief revision should consist, first, in giving up all beliefs that no longer have a satisfactory justification and, second, in adding new beliefs that have become justified. On the other hand, according to the coherence theory, the objectives are, first, to maintain consistency in the revised epistemic state and, second, to make minimal changes of the old state that guarantee sufficient overall coherence. Thus, the two theories of belief revision are based on conflicting ideas of what constitutes rational changes of belief. The choice of underlying theory is, of course, also crucial for how a computer scientist will attack the problem of implementing a belief revision system.

-----

Taken from 'Automating Belief Revision for AgentSpeak' (2006), Natasha Alechina et al.

AGM style belief revision is sometimes referred to as coherence approach to belief revision, because it is based on the ideas of coherence and information economy. It requires that the changes to the agent's belief state caused by a revision be as small as possible. In particular, if the agent has to give up a belief in A, it does not give up believing in things for which A was the sole justification, so long as they are consistent with the remaining beliefs.

Another strand of theoretical work in belief revision is the foundational, or reason-maintenance style approach to belief revision. Reason-maintenance style belief revision is concerned with tracking dependencies between beliefs. Each belief has a set of justifications, and the reasons for holding a belief can be traced back through these justifications to a set of foundational beliefs. When a belief must be given up, sufficient foundational beliefs have to be withdrawn to render the belief underivable. Moreover, if all the justifications for a belief are withdrawn, then that belief itself should no longer be held. Most implementations of reason-maintenance style belief revision are incomplete in the logical sense, but tractable.

Three Kinds of Belief Changes

Taken from 'Belief Revision' (1992) (page 3), Edited by Peter Gardenfors

A belief revision occurs when a new piece of information that is inconsistent with the present belief system (or database) is added to that system in such a way that the result is a new consistent belief system. But this is not the only kind of change that can occur in a belief system. Depending on how beliefs are represented and what kinds of inputs are accepted, different typologies of belief changes are possible.

In the most common case, when beliefs are represented by sentences in some code, and when a belief is either accepted or rejected in a belief system (so that no degrees of belief are considered), one can distinguish three main kinds of belief changes:

(i) Expansion: A new sentence is added to a belief system together with the logical consequences of the addition (regardless of whether the larger set so formed is consistent).

(ii) Revision: A new sentence that is inconsistent with a belief system is added, but, in order to maintain consistency in the resulting belief system, some of the old sentences are deleted.

(iii) Contraction: Some sentence in the belief system is retracted without adding any new facts. In order for the resulting system to be closed under logical consequences some other sentences must be given up.
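
A minimal sketch of the three operations over a set of literal strings (Python; this is my own drastic simplification: no logical closure, no entrenchment ordering, and "inconsistency" just means holding both p and not-p):

def neg(p):
    return p[4:] if p.startswith("not ") else "not " + p

def expand(beliefs, p):
    return beliefs | {p}                    # may produce an inconsistent set

def contract(beliefs, p):
    return beliefs - {p}                    # give up p (logical consequences ignored here)

def revise(beliefs, p):
    return contract(beliefs, neg(p)) | {p}  # make room for p, then add it

kb = {"door locked", "alarm set"}
print(revise(kb, "not door locked"))        # -> {'alarm set', 'not door locked'} (order may vary)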

Tuesday 31 July 2007

28, An Argumentation-based Approach for Practical Reasoning

Notes from 'An Argumentation-based Approach for Practical Reasoning' (2006), by Iyad Rahwan and Leila Amgoud

"We build on recent work on argumentation frameworks for generating desires and plans..."

1, Introduction

2, Preliminaries

(Desire-Generation Rules; Planning Rules; Agent's bases; Potential Desire)

3, Argumentation Frameworks

3.1, Arguing over beliefs

(Belief Argument; Certainty level; Comparing arguments; Conflicts between Belief Arguments; Belief Argumentation framework; Defence; Acceptable Belief Argument)

3.2, Arguing over desires

(Explanatory Argument; The force of explanatory arguments; Comparing mixed arguments; Comparing explanatory arguments; Attack among Explanatory and Belief Arguments; Argumentation framework; Defence among Explanatory and Belief Arguments; Justified desire)

3.3, Arguing over plans

(Partial plan; Instrumental Argument, or Complete Plan; Strength of Instrumental Arguments; Conflict-free sets of instrumental arguments; Acceptable Set of Instrumental Arguments; Achievable desire; Utility of Set of Instrumental Arguments; Preferred Set; Intention set)

4, Related Works

5, Conclusions

Tuesday 17 July 2007

27, On the Benefits of Exploiting Hierarchical Goals in Bilateral Automated Negotiation

Notes from 'On the Benefits of Exploiting Hierarchical Goals in Bilateral Automated Negotiation' (2007), by Iyad Rahwan et al.

1, Introduction

2, Preliminaries

(Allocation, Utility functions, Payment, Deal, Utility of a Deal for an Agent, Rational Deals for an Agent, Individual Rational Deals)

3, Bargaining Protocol

(Dialogue History, Protocol-Reachable Deal)

4, Underlying Interests

(Partial Plan, Complete Plan, Individual Capability, Individually Achievable Plans, Utility of a Plan, Utility)

5, Mutual Interests

(Committed Goals, Achievable Plans)

6, Case Study: An IBN Protocol

7, Conclusion

Tuesday 26 June 2007

26.4-6, Argument-based Negotiation among BDI Agents

Notes taken from 'Argument-based Negotiation among BDI Agents' (2002), by Sonia V. Rueda, Alejandro J. Garcia, Guillermo R. Simari

4, Collaborative Agents

Collaborative MAS: A collaborative Multi-Agent System will be a pair of a set of argumentative BDI agents and a set of shared beliefs.

(Negotiating Beliefs; Proposals and Counterproposals; Side-effects; Failure in the Negotiation)

5, Communication Languages

(Interaction Protocol; Interaction Language; Negotiation Primitives)

6, Conclusions and Future Work...

26.3, Argument-based Negotiation among BDI Agents

Notes taken from 'Argument-based Negotiation among BDI Agents' (2002), by Sonia V. Rueda, Alejandro J. Garcia, Guillermo R. Simari

3, Planning and Argumentation

Argumentative BDI Agent: The agent's desires D will be represented by a set of literals that will also be called goals. A subset of D will represent a set of committed goals and will be referred to as the agent's intentions... The agent's beliefs will be represented by a restricted Defeasible Logic Program... Besides its beliefs, desires and intentions, an agent will have a set of actions that it may use to change its world.

Action: An action A is an ordered triple (P, X, C), where P is a set of literals representing preconditions for A, X is a consistent set of literals representing consequences of executing A, and C is a set of constraints of the form not L, where L is a literal.

Applicable Action...

Action Effect...
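
A small sketch of these definitions (Python; my own naive reading, in which a "not L" constraint simply means L must not be among the current beliefs):

from dataclasses import dataclass, field

@dataclass
class Action:
    preconditions: set                             # P
    consequences: set                              # X
    constraints: set = field(default_factory=set)  # the literals L from the "not L" constraints

def applicable(action, beliefs):
    return action.preconditions <= beliefs and not (action.constraints & beliefs)

def apply_action(action, beliefs):
    return beliefs | action.consequences if applicable(action, beliefs) else beliefs

open_door = Action({"at_door", "has_key"}, {"door_open"}, {"door_jammed"})
print(apply_action(open_door, {"at_door", "has_key"}))  # adds 'door_open'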

26.1-2, Argument-based Negotiation among BDI Agents

Notes taken from 'Argument-based Negotiation among BDI Agents' (2002), by Sonia V. Rueda, Alejandro J. Garcia, Guillermo R. Simari

"... Here we propose a deliberative mechanism for negotiation among BDI agents based in Argumentation."

1, Introduction

In a BDI agent, mental attitudes are used to model its cognitive capabilities. These mental attitudes include Beliefs, Desires and Intentions among others such as preferences, obligations, commitments, etc. These attitudes represent motivations of the agent and its informational and deliberative states which are used to determine its behaviour.

Agents will use a formalism based in argumentation in order to obtain plans for their goals, represented by literals. They will begin by trying to construct a warrant for the goal. That might not be possible because some needed literals are not available. The agent will try to obtain those missing literals, regarded as subgoals, by executing the actions it has available. When no action can achieve the subgoals the agent will request collaboration...

2, The Construction of a BDI Agent's Plan

Practical reasoning involves two fundamental processes: deciding which goals are going to be pursued, and choosing a plan for how to achieve them... The selected options will make up the agent's intentions; they will also have an influence on its actions, restrict future practical reasoning, and persist (in some way) in time...

... Abilities are associated with actions that have preconditions and consequences...

Friday 22 June 2007

Requesting

Taken from 'Reasoning About Rational Agents' (2000), by Michael Wooldridge

Request speech acts (directives) are attempts by a speaker to modify the intentions of the hearer. However, we can identify at least two different types of requests:

- Requests to bring about some state of affairs: An example of such a request would be when one agent said "Keep the door closed." We call such requests "requests-that".

- Requests to perform some particular action: An example of such a request would be when one agent said "Lock the door." We call such requests "requests-to".

Requests-that are more general than requests-to. In the former case (requests-that), the agent communicates an intended state of affairs, but does not communicate the means to achieve this state of affairs... In the case of requesting to, however, the agent does not communicate the desired state of affairs at all. Instead, it communicates an action to be performed, and the state of affairs to be achieved lies implicit within the action that was communicated...

25.6-9, Reasoning About Rational Agents

Notes taken from 'Reasoning About Rational Agents' (2000), by Michael Wooldridge

6, Collective Mental States

(Mutual Beliefs, Desires, and Intentions; Mutual Mental States and Teamwork)

7, Communication

(Speech Acts; Attempts; Informing; Requesting; Composite Speech Acts)

8, Cooperation

(What Is Cooperative Problem Solving?; Recognition; Team Formation; Plan Formation)

9, Logic and Agent Theory

(Specification; Implementation; Verification)

Thursday 21 June 2007

25.4-5, Reasoning About Rational Agents

Notes taken from 'Reasoning About Rational Agents' (2000), by Michael Wooldridge

4, LORA Defined

(Syntax; Semantics; Derived Connectives; Some Properties of LORA)

5, Properties of Rational Agents

BDI Correspondence Theory

Pairwise Interactions between Beliefs, Desires and Intentions

(Int i X) => (Des i X): If an agent intends something, then it desires it. Intuitively, this schema makes sense for rational agents...

(Des i X) => (Int i X): If an agent desires something, then it intends it. In other words, an agent intends all its options... This formula does not appear to capture any interesting properties of agents.

(Bel i X) => (Des i X): This is a well-known, if not widely-admired property of agents known as realism ("accepting the inevitable"). For example, suppose I believe that the sun will definitely rise tomorrow. Then, one could argue, it makes no sense for me to desire that the sun will not rise... As a property of rational agents, realism seems too strong...

(Des i X) => (Bel i X): If an agent desires something, then it believes it. To give a concrete example, suppose I desire I am rich: should I then believe I am rich? Clearly not.

(Int i X) => (Bel i X): If an agent intends something, then it believes it... Suppose I have an intention to write a book; does this imply I believe I will write it? One could argue that, in general, it is too strong a requirement for a rational agent... While I certainly believe it is possible that I will succeed in my intention to write the book, I do not believe it is inevitable that I will do so...

(Bel i X) => (Int i X): If an agent believes something, then it intends it. Again, this is a kind of realism property... Suppose that I believe that X is true: should I then adopt X as an intention? Clearly not. This would imply that I would choose and commit to everything that I believed was true. Intending something implies selecting it and committing resources to achieving it. It makes no sense to suggest committing resources to achieving something that is already true.

These formulae are a useful starting point for our analysis of the possible relationships that exist among the three components of an agent's mental state. However, it is clear that a finer-grained analysis of the relationships is likely to yield more intuitively reasonable results.

Varieties of Realism

(Int i X) => ¬(Des i ¬X)
(Des i X) => ¬(Int i ¬X)
These properties say that an agent's intentions are consistent with its desires, and conversely, its desires are consistent with its intentions... These schemas, which capture intention-desire consistency, appear to be reasonable properties to demand of rational agents in some, but not all circumstances... Under certain circumstances, it makes sense for an agent to reconsider its intentions - to deliberate over them, and possibly change focus. This implies entertaining options (desires) that are not necessarily consistent with its current intentions...

(Bel i X) => ¬(Des i ¬X)
(Des i X) => ¬(Bel i ¬X)
These schemas capture belief-desire consistency. As an example of the first, if I believe it is raining, there is no point in desiring it is not raining, since I will not be able to change what is already the case. As for the second, on first consideration, this schema seems unreasonable. For example, I may desire to be rich while believing that I am not currently rich. But when we distinguish between present-directed and future-directed desires and beliefs, the property makes sense for rational agents...

Systems of BDI Logic

The Side-Effect Problem

The side-effect problem is illustrated by the following scenario: "Janine intends to visit the dentist in order to have a tooth pulled. She is aware that as a consequence of having a tooth pulled, she will suffer pain. Does Janine intend to suffer pain?"

... It is generally agreed that rational agents do not have to intend the consequences of their intentions. In other words, Janine can intend to have a tooth pulled, believing that this will cause pain, without intending to suffer pain.

25.1-3, Reasoning About Rational Agents

Notes taken from 'Reasoning About Rational Agents' (2000), by Michael Wooldridge

1, Rational Agents

(Properties of Rational Agents, A Software Engineering Perspective, Belief-Desire-Intention Agents, Reasoning About Belief-Desire-Intention Agents, FAQ)

2, The Belief-Desire-Intention Model

(Practical Reasoning, Intentions in Practical Reasoning, Implementing Rational Agents, The Deliberation Process, Commitment Strategies, Intention Reconsideration, Mental States and Computer Programs)

3, Introduction to LORA

This logic (LORA: "Logic of Rational Agents") allows us to represent the properties of rational agents and reason about them in an unambiguous, well-defined way.

Like any logic, LORA has a syntax, a semantics, and a proof theory. The syntax of LORA defines a set of acceptable constructions known as well-formed formulae (or just formulae). The semantics assign a precise meaning to every formula of LORA. Finally, the proof theory of LORA gives us some basic properties of the logic and tells us how further properties can be established.

The language of LORA combines four distinct components (a toy encoding is sketched after the list below):

1. A first-order component, which is in essence classical first-order logic...
2. A belief-desire-intention component, which allows us to express the beliefs, desires, and intentions of agents within a system.
3. A temporal component, which allows us to represent the dynamic aspects of systems - how they vary over time.
4. An action component, which allows us to represent the actions that agents perform, and the effects of these actions.
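The toy encoding referred to above is a hypothetical sketch of my own, not the grammar defined in the book: a formula is represented as a small abstract syntax tree with one constructor per component.

    # Hypothetical abstract syntax for LORA-like formulae, one constructor per component.
    from dataclasses import dataclass

    @dataclass
    class Atom:            # first-order component, e.g. Rich(janine)
        predicate: str
        terms: tuple

    @dataclass
    class Attitude:        # belief-desire-intention component
        kind: str          # "Bel", "Des" or "Int"
        agent: str
        formula: object

    @dataclass
    class Temporal:        # temporal component, e.g. "sometime" or "always"
        operator: str
        formula: object

    @dataclass
    class Happens:         # action component: an action expression occurs
        action: str
        agent: str

    # (Int janine (sometime Rich(janine)))
    example = Attitude("Int", "janine", Temporal("sometime", Atom("Rich", ("janine",))))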

24.12, An Introduction to Multiagent Systems

Notes taken from 'An Introduction to Multiagent Systems' (2002), by Michael Wooldridge

12, Logics for Multiagent Systems

(Why Modal Logic?, Possible-Worlds Semantics for Modal Logics)

Normal Modal Logics

The basic possible-worlds approach has the following disadvantages as a multiagent epistemic logic:

- agents believe all valid formulae;
- agents' beliefs are closed under logical consequence;
- logically equivalent propositions are indistinguishable as beliefs; and
- if agents are inconsistent, then they believe everything.

Epistemic Logic for Multiagent Systems

Pro-attitudes: Goals and Desires

An obvious approach to developing a logic of goals or desires is to adapt possible-worlds semantics. In this view, each goal-accessible world represents one way the world might be if the agent's goals were realised. However, this approach falls prey to the side effect problem, in that it predicts that agents have a goal of the logical consequences of their goals (cf. the logical omniscience problem). This is not a desirable property: one might have a goal of going to the dentist, with the necessary consequence of suffering pain, without having a goal of suffering pain.

Common and Distributed knowledge

Integrated Theories of Agency

When building intelligent agents - particularly agents that must interact with humans - it is important that a rational balance is achieved between the beliefs, goals, and intentions of agents.

"For example, the following are desirable properties of intention: an autonomous agent should act on its intentions, not in spite of them; adopt intentions it believes are feasible and forego those believed to be infeasible; keep (or commit to) intentions, but not forever; discharge those intentions believed to have been satisfied; alter intentions when relevant beliefs change; and adopt subsidiary intentions during plan formation." (Cohen and Levesque, 1990)

Recall the properties of intentions, as discussed in Chapter 4.

(1) Intentions pose problems for agents, who need to determine ways of achieving them.
(2) Intentions provide a 'filter' for adopting other intentions, which must not conflict.
(3) Agents track the success of their intentions, and are inclined to try again if their attempts fail.
(4) Agents believe their intentions are possible.
(5) Agents do not believe they will not bring about their intentions.
(6) Under certain circumstances, agents believe they will bring about their intentions.
(7) Agents need not intend all the expected side effects of their intentions.

Formal Methods in Agent-Oriented Software Engineering

24.5-11, An Introduction to Multiagent Systems

Notes taken from 'An Introduction to Multiagent Systems' (2002), by Michael Wooldridge

5, Reactive and Hybrid Agents

6, Multiagent Interactions

(Utilities and Preferences, Multiagent Encounters, Dominant Strategies and Nash Equilibria, Competitive and Zero-Sum Interactions, The Prisoner's Dilemma, Other Symmetric 2*2 Interactions, Dependence Relations in Multiagent Systems)

7, Reaching Agreements

(Mechanism Design, Auctions, Negotiation, Argumentation)

8, Communication

(Speech Acts, Agent Communication Languages, Ontologies for Agent Communication, Coordination Languages)

9, Working Together

(Cooperative Distributed Problem Solving, Task Sharing and Result Sharing, Result Sharing, Combining Task and Result Sharing, Handling Inconsistency, Coordination, Multiagent Planning and Synchronization)

10, Methodologies

(When is an Agent-Based Solution Appropriate?, Agent-Oriented Analysis and Design Techniques, Pitfalls of Agent Development, Mobile Agents)

11, Applications

24.3-4, An Introduction to Multiagent Systems

Notes taken from 'An Introduction to Multiagent Systems' (2002), by Michael Wooldridge

3, Deductive Reasoning Agents

(Agents as Theorem Provers, Agent-Oriented Programming, Concurrent MetateM)

4, Practical Reasoning Agents

Practical Reasoning Equals Deliberation Plus Means-End Reasoning: Practical reasoning is reasoning directed towards actions - the process of figuring out what to do.

"Practical reasoning is a matter of weighing conflicting considerations for and against competing options, where the relevant considerations are provided by what the agent desires/values/cares about and what the agent believes." (Bratman, 1990)

Human practical reasoning appears to consist of at least two distinct activities. The first of these involves deciding what state of affairs we want to achieve (deliberation); the second process involves deciding how we want to achieve these states of affairs (means-end reasoning).

We refer to the states of affairs that an agent has chosen and committed to as its intentions.

Intentions play the following important roles in practical reasoning:

- Intentions drive means-end reasoning...
- Intentions persist...
- Intentions constrain future deliberation...
- Intentions influence beliefs upon which future practical reasoning is based...

Means-Ends Reasoning: A planner is a system that takes as input representations of the following:

(1) A goal, intention or a task. This is something that the agent wants to achieve, or a state of affairs that the agent wants to maintain or avoid.
(2) The current state of the environment - the agent's beliefs.
(3) The actions available to the agent.

(Implementing a Practical Reasoning Agent, HOMER: an Agent That Plans)
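A minimal sketch of such a practical reasoning agent's control loop, assuming hypothetical Python stubs of my own (perceive, deliberate, plan and execute are placeholders, not the book's definitions):

    # Skeleton of a deliberation + means-end reasoning control loop.
    def practical_reasoning_agent(beliefs, perceive, deliberate, plan, execute):
        intentions = set()
        while True:
            beliefs = beliefs | perceive()       # belief update (simplified to set union)
            intentions = deliberate(beliefs)     # deliberation: choose states of affairs to commit to
            pi = plan(beliefs, intentions)       # means-end reasoning: find a plan achieving them
            for action in pi:                    # act on the plan
                execute(action)

A realistic version would interleave execution with belief update, intention reconsideration and replanning; the commitment strategy determines how readily intentions are dropped.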

24.2, An Introduction to Multiagent Systems

Notes taken from 'An Introduction to Multiagent Systems' (2002), by Michael Wooldridge

2, Intelligent Agents

An agent is a computer system that is situated in some environment, and that is capable of autonomous action in this environment in order to meet its design objectives.

Environments: Russell and Norvig (1995) suggest the following classification of environment properties:

- Accessible versus inaccessible...
- Deterministic versus non-deterministic...
- Static versus dynamic...
- Discrete versus continuous...

Intelligent Agents: The following list of the kinds of capabilities that we might expect an intelligent agent to have was suggested by Wooldridge and Jennings (1995):

- Reactivity...
- Proactiveness...
- Social ability...

... What turns out to be hard is building a system that achieves an effective balance between goal-directed and reactive behaviour.

(Agents and Objects, Agents and Expert Systems, Agents as Intentional Systems, Abstract Architectures for Intelligent Agents, How to Tell an Agent What to Do, Synthesizing Agents)

24.1, An Introduction to Multiagent Systems

Notes taken from 'An Introduction to Multiagent Systems' (2002), by Michael Wooldridge

1, Introduction

This book is about multiagent systems. It addresses itself to two key problems:

- How do we build agents that are capable of independent, autonomous action in order to successfully carry out the tasks that we delegate to them?

- How do we build agents that are capable of interacting (cooperating, coordinating, negotiating) with other agents in order to successfully carry out the tasks that we delegate to them, particularly when the other agents cannot be assumed to share the same interests/goals?

The first problem is that of agent design, and the second problem is that of society design. The two problems are not orthogonal - for example, in order to build a society of agents that work together effectively, it may help if we give members of the society models of the other agents in it.

The Vision Thing: "You are in desperate need of a last minute holiday somewhere warm and dry. After specifying your requirements to your personal digital assistant (PDA), it converses with a number of different Web sites, which sell services such as flights, hotel rooms, and hire cars. After hard negotiation on your behalf with a range of sites, your PDA presents you with a package holiday."

There are many basic research problems that need to be solved in order to make such a scenario work; such as:

- How do you state your preferences to your agents?

- How can your agent compare different deals from different vendors?

- What algorithms can your agent use to negotiate with other agents (so as to ensure you are not 'ripped off')?

Objections to Multiagent Systems: Is it not all just distributed/concurrent systems?

In multiagent systems, there are two important twists to the concurrent systems story.

- First, because agents are assumed to be autonomous - capable of making independent decisions about what to do in order to satisfy their design objectives - it is generally assumed that synchronization and coordination structures in a multiagent system are not hardwired in at design time, as they typically are in standard concurrent/distributed systems. We therefore need mechanisms that will allow agents to synchronize and coordinate their activities at run time.

- Second, the encounters that occur among computing elements in a multiagent system are economic encounters, in the sense that they are encounters between self-interested entities. In a classic distributed/concurrent system, all the computing elements are implicitly assumed to share a common goal (of making the overall system function correctly). In multiagent systems, it is assumed instead that agents are primarily concerned with their own welfare (although of course they will be acting on behalf of some user/owner).

Tuesday 12 June 2007

Backward and Forward Reasoning in Agents

The reasoning core of hybrid agents, which exhibit both rational/deliberative and reactive behaviour, is a proof procedure (executed within an observe-think-act cycle) that combines forward and backward reasoning:

Backward Reasoning: Used primarily for planning, problem solving and other deliberative activities.

Forward Reasoning: Used primarily for reactivity to the environment, possibly including other agents.
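A toy illustration of the two reasoning directions over the same propositional rule base (my own sketch; the proof procedures used in actual hybrid agent architectures are considerably richer):

    # Rules as (premises, conclusion) pairs over propositional atoms.
    RULES = [({"dirt_detected"}, "do_clean"),
             ({"at_dock", "battery_low"}, "charge"),
             ({"charge"}, "battery_ok")]

    def forward_chain(facts):
        """Data-driven: fire rules until no new conclusions follow (reacting to observations)."""
        facts = set(facts)
        changed = True
        while changed:
            changed = False
            for premises, conclusion in RULES:
                if premises <= facts and conclusion not in facts:
                    facts.add(conclusion)
                    changed = True
        return facts

    def backward_chain(goal, facts):
        """Goal-directed: reduce a goal to subgoals (planning/problem-solving flavour)."""
        if goal in facts:
            return True
        return any(all(backward_chain(p, facts) for p in premises)
                   for premises, conclusion in RULES if conclusion == goal)

    print(forward_chain({"at_dock", "battery_low"}))                  # adds 'charge' and 'battery_ok'
    print(backward_chain("battery_ok", {"at_dock", "battery_low"}))   # True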

Conformance to Protocols

A protocol specifies the "rules of encounter" governing a dialogue between agents. It specifies which agent is allowed to say what in a given situation.

There are different levels of an agent's conformance to a protocol, as follows (a toy conformance check is sketched after the list):

- Weak conformance - iff it will never utter an illegal dialogue move.

- Exhaustive conformance - iff it is weakly conformant and it will utter at least one dialogue move when required by the protocol.

- Robust conformance - iff it is exhaustively conformant and it utters the (special) dialogue move "not-understood" whenever it receives an illegal move from the other agent.
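As a toy picture of the first two levels (my own sketch, in which a protocol is reduced to a table from dialogue states to legal moves, far simpler than the formal definitions):

    # A protocol as a map: dialogue state -> set of legal moves.
    protocol = {"start": {"request"},
                "requested": {"offer", "refuse"},
                "offered": {"accept", "reject", "not-understood"}}

    def weakly_conformant(agent_move, state):
        """Weak conformance: the agent never utters an illegal move (staying silent is allowed)."""
        move = agent_move(state)
        return move is None or move in protocol.get(state, set())

    def exhaustively_conformant(agent_move, state):
        """Exhaustive conformance: weakly conformant, and the agent does reply when a move is required."""
        move = agent_move(state)
        return weakly_conformant(agent_move, state) and (not protocol.get(state) or move is not None)

    # A silent agent is weakly but not exhaustively conformant in state "requested".
    silent = lambda state: None
    print(weakly_conformant(silent, "requested"), exhaustively_conformant(silent, "requested"))   # True False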

Deduction, Induction, Abduction

Deduction: An analytic process based on the application of the general rules to particular cases, with the inference of a result.

Induction: Synthetic reasoning which infers the rule from the case and the result.

Abduction: Another form of synthetic inference, but of the case from a rule and a result.
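Peirce's bean example makes the three directions concrete; the example is Peirce's, but the encoding below is my own illustrative sketch:

    # Peirce's bean example, with rule, case and result as simple propositions.
    rule   = ("from_this_bag", "white")   # "all beans from this bag are white"
    case   = "from_this_bag"              # "these beans are from this bag"
    result = "white"                      # "these beans are white"

    def deduction(rule, case):            # from rule and case, infer the result
        antecedent, consequent = rule
        return consequent if case == antecedent else None

    def induction(case, result):          # from case and result, conjecture the rule
        return (case, result)

    def abduction(rule, result):          # from rule and result, hypothesise the case
        antecedent, consequent = rule
        return antecedent if result == consequent else None

    print(deduction(rule, case))      # 'white'
    print(induction(case, result))    # ('from_this_bag', 'white')
    print(abduction(rule, result))    # 'from_this_bag'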

Friday 8 June 2007

23, Conflict-free normative agents using assumption-based argumentation

Notes taken from 'Conflict-free normative agents using assumption-based argumentation' (2007), by Dorian Gaertner and Francesca Toni

"... We (map) a form of normative BDI agents onto assumption-based argumentation. By way of this mapping we equip our agents with the capability of resolving conflicts amongst norms, beliefs, desires and intentions. This conflict resolution is achieved by using the agent's preferences, represented in a variety of formats..."

1, Introduction

Normative agents that are governed by social norms may see conflicts arise amongst their individual desires, or beliefs, or intentions. These conflicts may be resolved by rendering information (such as norms, beliefs, desires and intentions) defeasible and by enforcing preferences. In turn, argumentation has proved to be a useful technique for reasoning with defeasible information and preferences when conflicts may arise.

In this paper we adopt a model for normative agents, whereby agents hold beliefs, desires and intentions, as in a conventional BDI model, but these mental attitudes are seen as contexts and the relationships amongst them are given by means of bridge rules...

2, BDI+N Agents: Preliminaries

(Background (BDI+N agents), Norm Representation in BDI+N Agents, Example)

3, Conflict Avoidance

(Background (Assumption-based argumentation framework), Naive Translation into Assumption-Based Argumentation, Avoiding Conflicts using Assumption-Based Argumentation)

4, Conflict Resolution using Preferences

(Preferences as a Total Ordering, Preferences as a Partial Ordering, Defining Dynamic Preferences via Meta-rules)

5, Conclusions

In this paper we have proposed to use assumption-based argumentation to solve conflicts that a normative agent can encounter, arising from applying conflicting norms but also due to conflicting beliefs, desires and intentions. We have employed qualitative preferences over an agent's beliefs, desires and intentions and over the norms it is subjected to in order to resolve conflicts...
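To give a flavour of assumption-based argumentation itself (a deliberately tiny sketch of my own, not the authors' mapping): rules plus assumptions support claims, every assumption has a contrary, and an argument attacks another by deriving the contrary of an assumption the other relies on.

    # Minimal assumption-based argumentation flavour (illustrative only).
    rules = {"sanction": ({"violation", "no_excuse"},),      # claim -> alternative premise sets
             "violation": ({"disclosed_data"},)}
    facts = {"disclosed_data"}
    assumptions = {"no_excuse"}                              # defeasible premises
    contrary = {"no_excuse": "excuse"}                       # attacking an assumption = deriving its contrary

    def supported(claim, used):
        """Can `claim` be derived from facts, assumptions and rules? Collect the assumptions used."""
        if claim in facts:
            return True
        if claim in assumptions:
            used.add(claim)
            return True
        return any(all(supported(p, used) for p in premises)
                   for premises in rules.get(claim, ()))

    used = set()
    print(supported("sanction", used), used)        # True {'no_excuse'}
    # An opposing argument deriving 'excuse' would attack this one, because 'excuse'
    # is the contrary of the assumption 'no_excuse' it relies on.
    print(supported(contrary["no_excuse"], set()))  # False: no such attacker in this toy example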

Tuesday 5 June 2007

Topics of automated negotiation research

Taken from ‘Automated Negotiation: Prospects, Methods and Challenges’ (2001), by N. R. Jennings et al.

Automated negotiation research can be considered to deal with three broad topics:

- Negotiation Protocols: the set of rules that govern the interaction...

- Negotiation Objects: the range of issues over which agreement must be reached...

- Agents’ Decision Making Models: the decision making apparatus the participants employ to act in line with the negotiation protocol in order to achieve their objectives...

22, The Carneades Argumentation Framework

Notes taken from ‘The Carneades Argumentation Framework (Using Presumptions and Exceptions to Model Critical Questions)’ (2003), by Thomas F. Gordon and Douglas Walton

“We present a formal, mathematical model of argument structure and evaluation, called the Carneades Argumentation Framework… (which) uses three kinds of premises (ordinary premises, presumptions and exceptions) and information about the dialectical status of arguments (undisputed, at issue, accepted or rejected) to model critical questions in such a way to allow the burden of proof to be allocated to the proponent or the respondent, as appropriate.”

1, Introduction

The Carneades Argumentation Framework uses the device of critical questions to evaluate an argument... The evaluation of arguments in Carneades depends on the state of the dialog. Whether or not a premise of an argument holds depends on whether it is undisputed, at issue, or decided. One way to raise an issue is to ask a critical question. Also, the proof standard applicable for some issue may depend on the stage of the dialog. In a deliberation dialog, for example, a weak burden of proof would seem appropriate during brainstorming, in an early phase of the dialog...
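A toy reading of that last point (my own simplification, not the paper's formal model): a premise holds while it is undisputed or decided in its favour, and asking a critical question puts it at issue.

    # Toy reading of the excerpt above: premise status drives evaluation.
    def holds(status):
        return status in {"undisputed", "accepted"}

    premise_status = {"witness_is_credible": "undisputed"}

    def ask_critical_question(premise):
        """Raising an issue via a critical question: the premise is now at issue and no longer holds."""
        premise_status[premise] = "at_issue"

    print(holds(premise_status["witness_is_credible"]))   # True
    ask_critical_question("witness_is_credible")
    print(holds(premise_status["witness_is_credible"]))   # False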

2, Argument Structure...

3, Argument Evaluation...

4, Conclusion...

Thursday 17 May 2007

Traditionally fallacious moves in negotiation

How could the traditionally fallacious moves (as defined for persuasion dialogues) be used (non-fallaciously) for negotiation? Some examples of the kinds of fallacies meant:
- Appeal to force (argumentum ad baculum)
- Appeal to pity
- Playing on popular sentiments (argumentum ad populum)
- Attacking someone's position by raising questions about the person's character or personal situation (argumentum ad hominem)
- Alleging practical inconsistencies between a person and his circumstances (circumstantial ad hominem)
- Pointing out bias in the point of view of the other party (or "poisoning the well" variant)

Tuesday 15 May 2007

21, Commitment in Dialogue

Notes taken from 'Commitment in Dialogue: Basic Concepts of Interpersonal Reasoning' (1995), by Douglas N. Walton and Erik C. W. Krabbe

0, Introduction

1, The Anatomy of Commitment

(Action Commitment, Propositional Commitment)

2, The Dynamics of Commitment

(Incurring of commitment, Loss of commitment, Relations between commitments, Clashing commitments and inconsistency)

3, Dialogues: Types, Goals, and Shifts

(Types and goals of dialogue, Complex dialogue, Dialectical shifts, Illicit shifts and fallacies)

4, Systems of Dialogue Rules

(Tightening up and dark-side commitment, permissive persuasion dialogue, Rigorous persuasion dialogue, Complex persuasion dialogue)

5, Conclusions and Prospects

Friday 11 May 2007

20, Argumentation Schemes for Presumptive Reasoning

Notes taken from 'Argumentation Schemes for Presumptive Reasoning' (1995), by Douglas N. Walton

1, Introduction

In accepting the (presumptive) premises, the participants are bound to tentatively accept the conclusion, for the sake of argument or discussion, unless definite evidence comes in that is sufficient to warrant rejecting it.

Such presumptively based arguments can be very useful and important in cases where action must be taken, but firm evidence is not presently available.

Practical reasoning is a kind of goal-directed, knowledge-based reasoning that is directed to choosing a prudent course of action for an agent that is aware of its present circumstances. These circumstances can change, and practical reasoning is therefore to be understood as a dynamic kind of reasoning that needs to be corrected or updated as new information comes in.

2, Presumptive Reasoning

We need to distinguish between "concessions" and "substantive commitments". A substantive commitment is a proposition that a participant in dialogue is obliged to defend, or retract, if challenged by the other party to give reasons to support it. In a word, it has a burden of proof attached to it. This is the type of commitment to a proposition that goes along with having asserted it in a dialogue. A concession is a commitment where there is no such obligation to defend, if challenged. Concessions are assumptions agreed to "for the sake of argument". By nature, they are temporary, and do not necessarily represent an arguer's position in a dialogue.

We note the difference between pure supposition and assertion as kinds of speech acts. Assertion always carries with it a burden of proof, because assertion implies substantive commitment to the proposition asserted. Supposition (or assumption), however, requires only the agreement of the respondent, and carries with it no burden of proof on either side. Presumption, as a speech act, is halfway between mere supposition and assertion. Presumption essentially means that the proponent of the proposition in question does not have a burden of proof, only a burden to disprove contrary evidence, should it arise in the future sequence of dialogue. The burden here has three important characteristics - it is a future, conditional, and negative burden of proof. It could perhaps be called a burden to rebut, in appropriate circumstances.

Presumption is functionally opposed to burden of proof, meaning that presumption removes or absolves one side from the burden, and shifts the burden to the other side.

Presumption is understood as a kind of speech act that is halfway between assertion and mere assumption. An assertion normally carries with it a burden of proof in argument: "He who asserts must prove!" By contrast, if a participant in argumentation puts forward a mere assumption, he or she (or anyone in the dialogue) is free to retract it at any subsequent point in the dialogue without having to give evidence or reasons that would refute it. Assumptions are freely undertaken and can be freely rejected in a dialogue.

In order to be useful, presumptions must have a certain amount of "sticking power", but by their nature, they are tentative and subject to later retraction.

For example, in a potentially hazardous situation, it may be prudentially wise to tilt the burden of proof in the direction of safety. The maxim is to "err on the side of safety", where doubt creates the potential for danger.

A simple case is the accepted procedure for handling weapons on a firing range. The principle is always to assume a weapon is loaded, unless you are sure that it is not loaded. The test of whether you are sure of this is that you have, just before, inspected the chamber and perceived clearly that it is empty.

The same kind of example shows also, however, how tied to the specifics of a context or situation this kind of reasoning is. Suppose you are a soldier in wartime getting ready to defend your position against an imminent enemy assault. Here, reasoning again on practical grounds of safety or self-preservation, you act on a presumption that your weapon may be empty, by checking to see that it is not empty.

Customs, fashions, and popularly accepted ways of doing things, are another important source of presumptions. With many choices on how to do things in life, in the absence of knowledge that one way of doing something is any better or more harmful than another, people often tend to act on the presumption that the way to do something is the popularly accepted way of doing it.

3, The Argumentation Schemes

Walton describes and analyses 25 different argumentation schemes. For each argumentation scheme, a matching set of critical questions is given. This pairing brings out the essentially presumptive nature of the kind of reasoning involved in the use of argumentation schemes, and at the same time reveals the pragmatic and dialectical nature of how this reasoning works. The function of each argumentation scheme is to shift a weight of presumption from one side of a dialogue to the other. The opposing arguer in the dialogue can shift this weight of presumption back to the other side again by asking any of the appropriate critical questions matching that argumentation scheme. To once again get the presumption on his or her side, the original arguer (who used the argumentation scheme in the first place) must give a satisfactory answer to that critical question.

Some of the argumentation schemes are basic or fundamental, whereas others are composites made up from these basic schemes.
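A rough data-structure view of this pairing (my own sketch, using Walton's argument from expert opinion as the example scheme): a scheme instance carries its critical questions, and an unanswered critical question shifts the weight of presumption back to the questioner until the proponent answers it.

    # Toy pairing of an argumentation scheme with its critical questions.
    from dataclasses import dataclass, field

    @dataclass
    class SchemeInstance:
        name: str
        premises: list
        conclusion: str
        critical_questions: list
        open_questions: set = field(default_factory=set)

        def presumption_with_proponent(self):
            """The conclusion stays presumed only while no critical question is left unanswered."""
            return not self.open_questions

        def ask(self, cq):        # respondent shifts the weight of presumption back
            self.open_questions.add(cq)

        def answer(self, cq):     # proponent regains the presumption by answering
            self.open_questions.discard(cq)

    arg = SchemeInstance("argument_from_expert_opinion",
                         ["E is an expert in domain D", "E asserts that P"], "P",
                         ["Is E credible?", "Is P consistent with what other experts say?"])
    print(arg.presumption_with_proponent())   # True
    arg.ask("Is E credible?")
    print(arg.presumption_with_proponent())   # False
    arg.answer("Is E credible?")
    print(arg.presumption_with_proponent())   # True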

4, Argument from Ignorance

The arguments associated with these argumentation schemes are typically used in a balance of considerations type of case, where knowledge or hard information is lacking, of a kind that would enable the problem to be resolved or the dispute to be settled on that basis. In other words, these presumption-based arguments are generally arguments from ignorance. The logic of these arguments could be expressed by the phrase, "I don't know that this proposition is false, so until evidence comes in to refute it, I am entitled to provisionally assume that it is true." All of the argumentation schemes previously studied tend to take this general form.

In some cases, the argument from ignorance is a correct (nonfallacious) argument because we can rightly assume that our knowledge base is complete. If some proposition is not known to be in it, we can infer that this proposition must be false.
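This nonfallacious case corresponds closely to the closed-world assumption familiar from databases and logic programming; a minimal sketch with made-up example data:

    # Closed-world reading of the argument from ignorance: if the knowledge base is
    # assumed complete, anything not found in it is taken to be false.
    knowledge_base = {"flight_BA117_departs_0900", "flight_BA221_departs_1430"}

    def closed_world_holds(proposition):
        return proposition in knowledge_base

    def closed_world_negation(proposition):
        # "It is not known to be true, so (given a complete KB) assume it is false."
        return not closed_world_holds(proposition)

    print(closed_world_negation("flight_BA999_departs_1200"))   # True: presumed that no such flight exists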

5, Ignoring Qualifications

6, Argument from Consequences

The argument from consequences may be broadly characterised as the argument for accepting the truth (or falsehood) of a proposition by citing the consequences of accepting (or rejecting) that proposition.

Wednesday 9 May 2007

Concessions 'for the sake of argument'

When an agent A argues with another agent B that has its own (different) knowledge base (i.e. different beliefs, desires, intentions, values, preferences, etc.), the dialogue proceeds on the basis of publicly agreed matter. This publicly agreed matter consists of concessions taken on by both parties for the sake of the progression of the dialogue/argument. Otherwise, if A presents an argument to B with premises not shared by B, it may not be accepted.

Tips for a better talk (and slides)

Have a running example. Motivate (start) the talk with the example. I (the listener) need to know where you're taking me, otherwise I'll tune out, and I need to know that you are not solving a problem that doesn't exist.

Diagrams and pictures often go down well. Use arrows, boxes, etc. to show the structural overview and to bring everything together in a compact, visual way.

Contextual Commitments

Should it be possible for an agent to make a commitment in a given context, and then make a commitment in another context that would be contradictory to the first if the context is not considered? Any examples?

Wednesday 25 April 2007

Practical Reasoning

Taken from Chapter 4 of 'Persuasion in Practical Argument Using Value-based Argumentation Frameworks' (2003) Trevor Bench-Capon

In practical reasoning an argument often has the following form:

Action A should be performed in circumstances C, because the performance of A in C would promote some good G.

This kind of argument can be attacked in a number of ways (a toy value-based defeat check is sketched after the list):
- It may be that circumstances C do not obtain; or it may be that performing A in C would not promote good G. These are similar to the way in which a factual argument can be attacked in virtue of the falsity of a premise, or because the conclusion does not follow from the premise.
- Alternatively it can be attacked because performing some action B, which would exclude A, would also promote G in C. This is like an attack using an argument with a contradictory conclusion.
- However, a practical argument like the one above can be attacked in two additional ways: It may be that G is not accepted as a good worthy of promotion, or that performing action B, which would exclude performing A, would promote a good H in C, and good H is considered more desirable than G. The first of these new attacks concerns the ends to be considered, and the second the relative weight to be given to the ends...
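A toy rendering of the last two attacks, in the spirit of value-based argumentation (my own simplified encoding, not Bench-Capon's definitions): an attack only succeeds as a defeat if the audience does not prefer the value promoted by the attacked argument.

    # Arguments promote values; whether an attack defeats depends on the audience's value ordering.
    arguments = {"A": "G",    # "do A in C, because it promotes good G"
                 "B": "H"}    # "do B (excluding A) in C, because it promotes good H"
    attacks = [("B", "A"), ("A", "B")]          # mutually exclusive actions attack each other

    def defeats(attacker, target, value_preference):
        """An attack succeeds unless the target's value is strictly preferred to the attacker's."""
        return not value_preference(arguments[target], arguments[attacker])

    prefers_H = lambda v1, v2: v1 == "H" and v2 == "G"   # an audience that ranks good H above good G
    print(defeats("B", "A", prefers_H))   # True:  B defeats A for this audience
    print(defeats("A", "B", prefers_H))   # False: A fails to defeat B, so B (and good H) prevails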

Tuesday 24 April 2007

19, Assumption-based argumentation for epistemic and practical reasoning

Notes taken from 'Assumption-based argumentation for epistemic and practical reasoning' (2007), by Francesca Toni

"Assumption-based argumentation can serve as an effective computational tool for argumentation-based epistemic and practical reasoning, as required in a number of applications. In this paper we substantiate this claim by presenting formal mappings from frameworks for epistemic and practical reasoning onto assumption-based argumentation frameworks..."

1, Introduction

... In this paper, we consider two forms of reasoning that rational agents may need to perform, namely reasoning as to which beliefs they should hold (epistemic) and reasoning as to which course of action/decision they should choose (practical)...

2, Abstract and assumption-based argumentation...

3, Epistemic Reasoning...

3.1, Epistemic frameworks without preference rules...

3.2, Epistemic frameworks with preference rules...

4, Practical reasoning...

5, Example...

6, Conclusions

We have proposed concrete instances of assumption-based argumentation for epistemic reasoning... and practical reasoning...

... Within the ARGUGRID project, our approach to (epistemic and) practical reasoning can be used to model decisions concerning the orchestration of services available over the grid, taking into account preferences by the users and/or the service providers...

Monday 23 April 2007

The Big Question

After a discussion with fellow PhD students and after spending the last two months bogged down with nitty-gritty details about argumentation structure and semantics, it is time to ask the question: what is the big question?

Drawing inspiration from 'Getting to Yes: Negotiating Agreement Without Giving In', the big question will stem from this one, "What is the best way for agents to deal with their differences?"

So that's what the next few weeks will be dedicated to, defining the big question.

Sunday 15 April 2007

18, A Semantic Web Primer

Summary of ‘A Semantic Web Primer’ by Grigoris Antoniou and Frank van Harmelen (2004)

1, The Semantic Web Vision

- The Semantic Web is an initiative that aims at improving the current state of the World Wide Web.
- The key idea is the use of machine-processable Web information.
- Key technologies include explicit metadata, ontologies, logic and inferencing, and intelligent agents.
- The development of the Semantic Web proceeds in layers.

2, Structured Web Documents in XML

- XML is a metalanguage that allows users to define markup for their documents using tags.
- Nesting of tags introduces structure. The structure of documents can be enforced using schemas or DTDs.
- XML separates content and structure from formatting.
- XML is the de facto standard for the representation of structured information on the Web and supports machine processing of information.
- XML supports the exchange of structured information across different applications through markup, structure, and transformations.
- XML is supported by query languages.

Some points discussed in subsequent chapters include:
- The nesting of tags does not have standard meaning.
- The semantics of XML documents is not accessible to machines, only to people.
- Collaboration and exchange are supported if there is an underlying shared understanding of the vocabulary. XML is well-suited for close collaboration, where domain- or community-based vocabularies are used. It is not so well suited for global communication.

3, Describing Web Resources in RDF

- RDF provides a foundation for representing and processing metadata.
- RDF has a graph-based data model. Its key concepts are resource, property, and statement. A statement is a resource-property-value triple (see the small example after this list).
- RDF has an XML-based syntax to support syntactic interoperability. XML and RDF complement each other because RDF supports semantic interoperability.
- RDF has a decentralised philosophy and allows incremental building of knowledge, and its sharing and reuse.
- RDF is domain-independent. RDF Schema provides a mechanism for describing specific domains.
- RDF Schema is a primitive ontology language. It offers certain modelling primitives with fixed meaning. Key concepts of RDF Schema are class, subclass relations, property, subproperty relations, and domain and range restrictions.
- There exist query languages for RDF and RDFS.
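A minimal illustration of the triple model (plain Python tuples, purely to show the shape of the data; in practice an RDF toolkit and proper URIs/namespaces would be used):

    # RDF statements as (resource, property, value) triples.
    triples = [("http://example.org/book/primer", "http://purl.org/dc/elements/1.1/title", "A Semantic Web Primer"),
               ("http://example.org/book/primer", "http://purl.org/dc/elements/1.1/creator", "Grigoris Antoniou"),
               ("http://example.org/book/primer", "http://purl.org/dc/elements/1.1/creator", "Frank van Harmelen")]

    def query(subject=None, prop=None, value=None):
        """Match triples against a pattern; None acts as a wildcard (SPARQL-like in spirit only)."""
        return [t for t in triples
                if (subject is None or t[0] == subject)
                and (prop is None or t[1] == prop)
                and (value is None or t[2] == value)]

    print(query(prop="http://purl.org/dc/elements/1.1/creator"))   # both creator statements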

Some points that will be discussed in the next chapter:
- RDF Schema is quite primitive as a modelling language for the Web. Many desirable modelling primitives are missing.
- Therefore we need an ontology layer on top of RDF/RDFS.

4, Web Ontology Language: OWL

- OWL is the proposed standard for Web ontologies. It allows us to describe the semantics of knowledge in a machine-accessible way.
- OWL builds upon RDF and RDF Schema: (XML-based) RDF syntax is used; instances are defined using RDF descriptions; and most RDFS modelling primitives are used.
- Formal semantics and reasoning support are provided through the mapping of OWL onto logics. Predicate logic and description logics have been used for this purpose.

While OWL is sufficiently rich to be used in practice, extensions are in the making. They will provide further logical features, including rules.

5, Logic and Inference: Rules

- Horn logic is a subset of predicate logic that allows efficient reasoning. It forms a subset orthogonal to description logics.
- Horn logic is the basis of monotonic rules.
- Non-monotonic rules are useful in situations where the available information is incomplete. They are rules that may be overridden by contrary evidence (other rules); a small illustration follows this list.
- Priorities are used to resolve some conflicts between non-monotonic rules.
- The representation of rules in XML-like languages is straightforward.
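A small illustration of the contrast (my own toy encoding): the Horn rules below are monotonic, while the defeasible rules can conflict and are resolved by priorities.

    # Monotonic Horn rules: adding facts can only add conclusions.
    horn_rules = [({"professor"}, "faculty"), ({"faculty"}, "employee")]

    def horn_closure(facts):
        facts = set(facts)
        changed = True
        while changed:
            changed = False
            for body, head in horn_rules:
                if body <= facts and head not in facts:
                    facts.add(head)
                    changed = True
        return facts

    print(horn_closure({"professor"}))   # {'professor', 'faculty', 'employee'}

    # Non-monotonic rules with priorities: r2 overrides r1 when both are applicable.
    defeasible_rules = [("r1", {"apartment"}, "acceptable", 1),
                        ("r2", {"apartment", "no_garden"}, "not_acceptable", 2)]

    def conclude(facts):
        applicable = [r for r in defeasible_rules if r[1] <= facts]
        return max(applicable, key=lambda r: r[3])[2] if applicable else None

    print(conclude({"apartment"}))                # 'acceptable'
    print(conclude({"apartment", "no_garden"}))   # 'not_acceptable' (the higher-priority rule wins)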