
Friday, 23 July 2010

74, Argumentative Alternating Offers

Good paper ('Argumentative Alternating Offers', Nabila Hadidi, Yannis Dimopoulos, Pavlos Moraitis, 2010). Understandable. Always a good positive feeling understanding a paper!

The paper introduces argumentation into the "alternating offers (negotiation) protocol" and distinguishes between "practical" and "epistemic" arguments. Worth noting that the work is for the 2-agent setting and arguments are treated as abstract entities.

A few questions to ask of the author(s):
  • Are the conflict relations (Re and Rp) shared (or assumed to be shared) by both agents? (see page 442)
  • Are the preference relations (>=p and >=e) shared (or assumed to be shared) by both agents? (see page 442)
  • Why is the assumption on page 443 that all practical arguments are 'useful' for some offer necessary?
  • The 'reject' case on page 445 (and explained on page 447): Why is it so? What does it mean for arguments and offers to be removed from the agent's theory?
  • Are offers ever added to the agent's known offers (i.e. is the set of offers dynamic or static)?

Tuesday, 2 March 2010

68-69, Coloured Trails (CT) game

Looked through a couple of papers describing and running experiments using the Coloured Trails (CT) game for multi-agent resource negotiation:

68, The Influence of Social Dependencies on Decision-Making: Initial Investigations with a New Game, by Barbara J. Grosz, Sarit Kraus & Shavit Talman, 2004.

69, The Effects of Goal Revelation on Computer-Mediated Negotiation, by Ya'akov Gal et al, 2009.

Implementation referenced in [69] can be found here:
http://www.eecs.harvard.edu/ai/ct

Wednesday, 23 September 2009

62, Agents that reason and negotiate by arguing

A seminal piece of work ('Agents that reason and negotiate by arguing', 1998, Simon Parsons, Carles Sierra, Nick R. Jennings). Finally went through it after 3 years! Need to compare it with my (A)ABA-based ABN framework.

Nice examples of 'negotiation' dialogues (proposal, critique, counter-proposal, explanation) in Section 2.1. Would be nicer if they could be *generated*.

Can't see how the stuff in Sections 3-5 (agent architecture etc) links to the negotiation protocol in Section 2.2.

No concept of 'assumptions' or the ability for agents to reason (make decisions/utterances) despite incomplete information. We allow for this.

No implementation, though it is claimed there is a clear link between the formal (agent architecture) model and its practical instantiation. We support our framework with an implementation.

The framework is based on an ad hoc system of argumentation. Arguments can be classified into rough classes of acceptability, but this is not enough to determine the acceptability of arguments. Also, only inconsistency *between* agents is considered; inconsistency that arises within an agent is not considered/handled. We base our framework on a general argumentation system (AABA) for which the argument acceptability semantics are clearly defined.

This is what I intend to include in the Related Work section of my forthcoming "argmas09paper":
In [61] a negotiation language and protocol is presented that allows for the exchange of complex proposals which can include compelling arguments for why a proposal should be adopted. Whilst [61] does not concentrate on the way in which arguments are built and analysed, the work is extended in [62] by indicating how argumentation can be used to construct proposals, create critiques, provide explanations and meta-information. However, even in [62], further expansion is required for agents to be able to generate and rate arguments, and for any kind of implementation to be produced. In particular, the acceptability classes used in [62] to rank arguments are not sufficient to resolve inconsistencies that may arise within and between agents. A more fine-grained mechanism is required. We use an existing argumentation framework (AABA) for this purpose, that is able to build and determine the acceptability of arguments, even as the knowledge bases of agents change over time (as a result of the dialogues). The AABA framework also allows agents to make assumptions, enabling agents to make decisions even despite incomplete information. Lastly, we supplement our formal model with an implementation.

Friday, 18 September 2009

61, A framework for argumentation-based negotiation

Old paper ('A framework for argumentation-based negotiation', 1998, Carles Sierra et al) but some really good ideas for combining negotiation (offer, request, accept, reject and withdraw acts) with persuasion (appeal, threaten and reward acts). However, like most other papers, not fully worked out / generative.

To its advantage, the framework is for multi- (i.e. more than two) agent settings: "Deals are always between two agents, though an agent may be engaged simultaneously in negotiation with many agents for a given deal."

The 'attacks' relationship between "argument pairs" (i.e. an argument Arg supporting a formula p) is assumed to be a primitive notion, though argument pairs themselves are not. Defining such an 'attacks' relationship could get messy!

An authority relation (between agent roles) is used as the mechanism for comparing arguments, i.e. who puts forward an argument is as important as (maybe more important than) what is said. This potentially doesn't quite make sense in an argument-evaluation sense - it depends what is meant by "argument". See last paragraph of Section 4.1.1 of 'Argumentation-Based Negotiation' (1998).

Wednesday, 16 September 2009

60, Dialogue games that agents play within a society

Went through this journal paper ('Dialogue games that agents play within a society', 2009, Nishan C. Karunatillake et al) and the accompanying technical report ('Formal Semantics of ABN Framework', 2008, Nishan C. Karunatillake et al) after going through the main author's thesis. Questions similar to the thesis (see 59). In addition, this is what I plan on including in my forthcoming (argumentation-based negotiation social optimality) paper...

"... The argument-based negotiation framework of [60] is supplemented with a number of concrete negotiation strategies which allow agents to exchange arguments as part of the negotiation process. The example scenario/context considered allows for multiple (more than two) agents. However, contrary to our approach, the semantics of arguments is not considered. Instead, the focus is on using argumentation as a metaphor for characterising communication among agents. Also, deals involving more than two agents are not possible, as is required in our resource allocation setting in order to reach optimal allocations. ..."

Tuesday, 15 September 2009

59, Argumentation-Based Negotiation in a Social Context

This ('Argumentation-Based Negotiation in a Social Context', Nishan C. Karunatillake, 2006) is the third thesis I have gone through now. I really liked, and read fully, the first three chapters. Quite a few question marks penned when going through the protocol and operational semantics in Chapter 3 - questions regarding the method of arguing (challenging and asserting). Apparently these questions are addressed in a later journal paper and technical report, which I will go through now. It is worth noting that the negotiation/argumentation strategies defined in later chapters are far (in my opinion) from using the full capacity of the framework defined in Chapter 3.

I really liked the scenario presented in Chapter 4.1. Very good - lots of scope for play/conflicts/etc. However, the system model following this in Chapter 4.2 becomes very mathematics/number-based. The "argue" method doesn't really argue. It is quite similar to my "argmas09" paper - the responding agent provides a reason for rejection which the proposing agent incorporates into its knowledge-base for future proposals.

The strategies defined in Chapter 5.1 for the experimentation (empirical analysis) seemed rather arbitrary - there appears to be no justification for choosing these strategies over any others. I couldn't quite see the general applicability of the results presented in Chapter 5.3 beyond the specific application setting of this paper. I skipped/glossed over to the summary.

Chapter 6 proceeds by simplifying the experimental scenario defined in Chapter 4.1 for the argumentation strategies to be defined in this chapter. Really not clear what the defeat-status computation (to determine the validity of a claim/premise) used in these strategies is. Also, I couldn't quite work out how the argumentation was done in the strategies - not clear what is being challenged/asserted. This chapter could have done with some examples.

Friday, 4 September 2009

Karunatillake's Thesis - Chapter 3

My first set of questions sent to Nishan C. Karunatillake regarding chapter 3 of his thesis:
Thanks for the response.

Sorry if my questions are really technical/low-level. It's just that I am developing a model for multi-agent argumentative negotiation, I came across your work and I am trying to see if my model could map to your language, protocol etc. I have not read beyond Chapter 3 yet, so I apologise if some of my questions are answered later. Please let me know if that is the case.

I'll try to ask only a few questions at a time, as they occurred to me whilst working sequentially through the chapter, so as not to bombard you and in case questions that occurred later become clear.

Here goes...

- Looking at the 'Challenge' communicative predicate described as part of the protocol (page 79), one of the pre-conditions for challenging a rejection (or assertion) is that there be no 'reason' for the rejection (assertion respectively) in the agent's knowledge-base. Could it not be possible (in the context of this thesis) that there is a reason for rejecting, as well as a (counter-)reason for not rejecting in the agent's knowledge-base at the same time? Or is it meant here that 'the reason for rejecting' is stronger than 'the reason for not rejecting' in an argumentative semantic/heuristic sense?

- Also, the only valid response following a Challenge is for the other agent to Assert the justification (H). Could it not be possible (in the context of this thesis) for the agent that is to respond to *not* have a justification? What if the agent has no justification (if possible), how would the dialogue then proceed? What is meant by justification - that the justification is 'valid' according to the agent's knowledge-base, or is justification here meant more simply in a kind of deductive sense?

Hope that makes sense.
The response...
I see. I reckon it should not be that difficult. You may need to define your own domain language (one that describes your context or argumentation schema/model). Then link that domain language with the comm. language and protocol defined in the thesis. If you wish to do this formally, then you might need to alter some of the rules of the axiomatic and operational semantics to suit your application.

OK. Before I get to your specific questions, I would recommend you read the AIJ paper that followed this thesis (instead of the thesis). This is better than reading the version in the thesis, as I introduced some minor alterations afterwards to both the axiomatic and the operational semantics. Since AIJ doesn't allow on-line appendices, we also published the complete semantics as a separate technical report. The links to both documents are:

AIJ paper - http://users.ecs.soton.ac.uk/nnc/docs/aij09.pdf
Tech report - http://eprints.ecs.soton.ac.uk/16851/2/techreport.pdf

Now to your question.

In defining the comm. language I used a notion similar to operator overloading. In other words, certain language predicates are used for more than one similar, but not identical, purpose. The objective is to limit the number of language predicates and not duplicate unnecessarily. For instance, both the proponent and respondent can use the Open-Dialogue predicate, but they would have different pre- and post-conditions. Thus, the distinction becomes clear at the semantics level (both axiomatic and operational) and not necessarily at the syntax level. Both Challenge and Assert are used this way. In particular, the Challenge locution can be used for two purposes: (i) by the proponent to challenge the reason for rejecting a proposal; (ii) by either the proponent or respondent to challenge the reason for a particular assertion. The same applies to the Assert and Close-Dialogue locutions.

Q1

As mentioned above, Challenge is used for two purposes. This question, as I understand it, relates to the first purpose, challenging the reason for a rejection (the second being challenging a particular assertion).

In more detail, the respondent may choose to either accept a particular proposal or reject it. This decision is based on the respondent's R2 decision mechanism (see page 951 of the AIJ paper or page 85 of the thesis).

Yes, you are right: the respondent may have zero or more reasons for accepting a particular proposal and also zero or more reasons for rejecting it. The decision is based on which is the more compelling reason.

My agents are computational agents that attempt to maximise utility. So they calculate the cost vs benefit of the proposal: if the benefit is greater, accept; otherwise reject. In an argumentation sense, this can be read as: if the reason(s) for accepting are stronger than the reason(s) for rejecting, accept; otherwise reject.

So when the proponent challenges (the reason for rejecting a particular proposal), the respondent will pass on its reason(s). It would say it was compelled to reject because this and this reason (to reject) was much stronger than this and this reason (to accept).

If this reason is in conflict with the proponent's knowledge-base, then the dialogue may shift to a persuasive dialogue trying to correct any inconsistencies in each other's reasons (the proponent's reasons why the proposal should be accepted vs the respondent's reasons why it was rejected).

Q2

Yes, the only valid response to a Challenge locution is an Assert. See also Figure 4 (page 948 in the AIJ) and, in more detail, Figure B1 (page 979 in the AIJ).

Case 1: If the challenge was a challenge of the reason for rejecting, then that reason is asserted. In my context, the respondent would say: I believe the benefit of the proposal, due to this and this reason, is this, but the cost of accepting it, due to this and this reason, is this. So the cost is higher than the benefit; thus the rejection.

Case 2: If the challenge was of the justification for a particular assertion it has made, then the reason behind that assertion will be returned. This follows the schema in the form of deductive equations (5) and (6) on page 943 of the AIJ.

Yes, theoretically the reason can be null.

In the first case, it may simply mean, as a reason for rejection: I don't have any reason to accept (no reward), so I rejected. This doesn't cause much complication. The proponent will analyse why it thinks the proposal should be accepted (the proponent's reasons, if it has any) vs this null reason from the respondent. If there is a conflict, it can argue about that (why, for instance, the respondent may have misunderstood the reward) or give an alternative proposal with a reward.

In the latter case, it may mean: I don't have a reason for asserting X, but I believe X to be true. Again, the other party will compare this with its own reason (why X should be false) and will either argue to correct the opponent's knowledge or correct its own knowledge.

Hope this clarifies things a bit.
Will continue skimming through the thesis anyway, despite the suggestion otherwise, before moving on to check out the journal paper and technical report.

Wednesday, 5 August 2009

56, Negotiating Socially Optimal Allocations of Resources

Can't believe I haven't blogged this paper ('Negotiating Socially Optimal Allocations of Resources', 2006, by Ulle Endriss et al) til now. Fundamental!

Sunday, 19 April 2009

Individual Transferable Quotas

An article (idea) I came across that makes use of distributed negotiation and social welfare concepts:

"Iceland has not quite proved that fish can sing, but it has shown they can continue to flourish, even when hunted by their main predator, man. Central to its policy are the individual transferable quotas given to each fishing boat for each species on the basis of her average catch of that fish over a three-year period. This settles the boat’s share of the total allowable catch of that fish for the entire country. The size of this total is announced each year on the basis of scientific advice from the independent Marine Research Institute.

Subject to certain conditions, quotas can be traded among boats. Bycatch must not be discarded. Instead it must be landed and recorded as part of that boat’s quota. If she has exhausted her quota, she must buy one from another boat, though 20% of a quota may be carried forward a year, and 5% of the next year’s quota can be claimed in advance..."


(Source: The Economist, January 3rd 2009)

Friday, 6 February 2009

49, An Empirical Study of Interest-Based Negotiation

Some notes noted whilst reading 'An Empirical Study of Interest-Based Negotiation' (2007) by Philippe Pasquier, Liz Sonenberg, Iyad Rahwan et al.

Assumptions of the paper (some which differ in my work (in progress)):
  • The resources are not shared and all the resources are owned. Agents also have a finite amount of "money", which is part of the resources and is the only divisible one.
  • Uses numerical utility values ("costs", "benefits", "payments" etc based on this).
  • Negotiation restricted to 2 agents.
  • All agents have shared, common and accurate knowledge.
  • No overlap between agents' goals, plans, needed resources etc, which avoids the problems of positive and negative interaction between goals and conflicts for resources.
  • Both (i.e. all) agents use the same strategy. (Manipulable, given that agents are out to maximise individual gains? Maybe, but agents are assumed to be truthful.)

Additionally: "Agents do not have any knowledge about the partner's utility function (not even a probability distribution) and have erroneous estimations of the value of the resources not owned." It seems the primary benefit of IBN in this paper is to explore how agents can correct such erroneous information. (Agents trust each other not to lie about resource valuations.) A comparison is made between agents capable of bargaining only and agents capable of bargaining and reframing.

Content of the paper:

  • Introduction and Motivations
  • Agents with hierarchical goals (/plans)
  • The Negotiation Framework (Bargaining and Reframing Protocols/Strategies)
  • Simulation and Example
  • Experimental Results (Frequency and Quality of the deals; Negotiation complexity)
  • Conclusion and Future Work

Thursday, 15 January 2009

48, A Multi-Agent Resource Negotiation for the Utilitarian Social Welfare

Very good point about compensatory side payments (and limits of agent budgets) on page 4 (Section 2 - Transaction).

Also, a good summary of Tuomas Sandholm's peer-to-peer negotiation work (rational and non-rational sequences of transactions, optima, etc) on page 4 (Section 2.1 - Convergence).

Nice conclusion to return back to.

48, A Multi-Agent Resource Negotiation for the Utilitarian Social Welfare

Quite related to my (work in progress) paper 'On the benefits of argumentation for negotiation'. The paper studies various "agent behaviours" in order to identify which one most often leads (by means of local interactions between the agents) to a (global? T(ransaction)-global?) "optimal" resource allocation.

Main contribution of the paper: Providing/designing/exhibiting an (explicit negotiation) process that is able to converge, in practice, either towards a global optimum, or towards a near optimal solution (resource allocation). Also, to compare the social value of the resource allocation that is reached at the end of the negotiation process with the globally optimum social value (obtained by means of a 0-1 linear program).

Not sure how this work differs from Andersson & Sandholm's (1999) [47] except in considering incomplete 'contact networks'.

Assumptions of the paper:
- Bilateral transactions (i.e. transactions between 2 agents only).
- Positive additive utility function which is comparable between agents.
- Resources are discrete, not shareable, not divisible, not consumable (static) and unique.
- No compensatory side payments.
- Sequential negotiations, i.e. only one agent at a time is able to (initiate) negotiation, though this does not seem significant in affecting the quality of the (social welfare of the) final allocation reached.
- (Implicitly:) Agents are truthful in reporting utilities. (This works in the case of "socially" transacting agents since agents are out to maximise social welfare and not individual welfare).
- All agents in a "contact network" (agent system) must use the same transaction type.

Content of the paper:
- Introduction (MARA problem; Contact network; Social welfare)
- Transaction (Convergence; Acceptability criteria; Transaction type; Communication Protocol)
- Experiments (Experiment protocol; Evaluation criteria; Optimal value determination)
- Social Gift (Behaviour variants; Behaviour efficiency; Proof of convergence; Egalitarian efficiency of the social gift)

Linking it to my work:
- It may be an idea to define argumentative negotiation policies that are based on "rational transactions" as well as "social transactions" (gifts, swaps and cluster-swaps) and to compare outcomes from each.
- What if agents could use gift, swap and cluster-swap transactions intermittently, as well as transactions involving multiple (3+) agents? Would that improve outcomes of negotiation (wrt the global optimum)? The former (mixing transaction types) is not considered in this paper. The latter (multi-agent transactions) is not possible (using the communication protocol of figure 1).
- Could interest-based negotiation (exchanging arguments etc) offer benefits in terms of path to solution in the original set-up as well as the two-additional set-ups described in the previous point?

Tuesday, 30 December 2008

47, Time-Quality Tradeoffs in Reallocative Negotiation with Combinatorial Contract Types

Just read 'Time-Quality Tradeoffs in Reallocative Negotiation with Combinatorial Contract Types' (1999) by Martin Andersson and Tuomas Sandholm following on from reading [46] yesterday. Some thoughts:

Nice discussion of distributed reallocative negotiation "versus" (centralized) (combinatorial) auctions at the end of page 1 continuing on page 2.

Multiagent Travelling Salesman Problem (page 2). Interesting.

The contracting system between agents ("contract sequencing") described on pages 4 and 5 is in essence an exhaustive search. Naturally, slow and cumbersome. Multi-agent dialogues and interest-based negotiation could perhaps play a role here. Also, no "algorithm" (contracting system) provided for OCSM-contracts, or even, contracts of mixed/different types.

I like the presentation of the results, i.e. comparing the different contract types in terms of (i) the outcomes (solution quality in terms of social welfare) reached, and (ii) the number of contracts tried and performed before an (local) optimum is reached.

Monday, 29 December 2008

46, Contract Types for Satisficing Task Allocation: I Theoretical Results

Classification of contract types below taken from 'Contract Types for Satisficing Task Allocation: I Theoretical Results' (1998) by Tuomas W. Sandholm. Very, very important/related paper to keep referring back to. Will need to realise the OCSM-contract to achieve completeness in my work.

O-contract: one task given by an agent i to an agent j (+ contract price i pays to j for handling the task).

C-contract: a cluster (more than 1) of tasks given by an agent i to an agent j (+ contract price i pays to j for handling the task set).

S-contract: swaps of tasks where agent i subcontracts a (single) task to agent j and vice-versa (+ amount i pays to j and amount j pays to i).

M-contract: A multi-agent contract involving at least three agents, wherein each agent involved gives away a single resource to another agent (+ payment).

Each contract type above is necessary (and avoids some of the local optima that the other three do not) but is not sufficient in and of itself for reaching the global optimum via "individually rational" contracts.

OCSM-contract: combines/merges the characteristics of the above four contract types into one contract type, so that their ideas can be applied simultaneously (atomically).
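As a minimal sketch (in Python, with hypothetical names and representations of my own, not Sandholm's formalism), the four atomic contract types might be modelled as data structures over task allocations, with the effect of applying an O-contract shown explicitly:

```python
from dataclasses import dataclass

# An allocation maps each agent to the set of tasks it currently handles.
Allocation = dict

@dataclass
class OContract:
    """One task moves from agent i to agent j for a price."""
    i: str
    j: str
    task: str
    price: float

@dataclass
class CContract:
    """A cluster (more than one) of tasks moves from i to j."""
    i: str
    j: str
    tasks: frozenset
    price: float

@dataclass
class SContract:
    """i and j swap one task each (with payments both ways)."""
    i: str
    j: str
    task_i: str
    task_j: str
    pay_i_to_j: float

@dataclass
class MContract:
    """At least three agents, each giving one task to another agent."""
    transfers: list  # [(giver, receiver, task, payment), ...]

def apply_o_contract(alloc: Allocation, c: OContract) -> Allocation:
    """Return a new allocation after i hands c.task to j."""
    new = {a: set(ts) for a, ts in alloc.items()}
    new[c.i].remove(c.task)
    new[c.j].add(c.task)
    return new
```

An OCSM-contract would then be a single atomic contract combining any mixture of these transfers and payments.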

Sunday, 28 December 2008

44, Towards Interest-Based Negotiation

Goal Arguments (which justify agents' adoption of certain goals) in the paper 'Towards Interest-Based Negotiation' (TIBN) by Iyad Rahwan et al take the form ((SuperG,B,SubG):G), i.e. an agent adopts (intends) a goal (G) because it believes G is instrumental to achieve its supergoal (SuperG), believes the context (B) which justifies G to be true and believes the plan (SubG) for achieving G to be achievable.

We identify here a few forms of attacks allowed (in TIBN) on these Goal Arguments for which we have equivalents in our multi-agent setting of 'On the Benefits of Argumentation for Negotiation' (OBAN).

(1)
- Attack in TIBN: For a Goal Argument ((SuperG,B,SubG):G), show ¬b where b is a belief in B, i.e. disqualifying a context condition.
- Similar attack in OBAN: Agent Y argues "I do not have resource R", where "you have resource R" is a belief agent X has (& utters) as part of either requesting R from Y or requesting R2 from Y. In the latter case, X's prior argument would be: "Y does not need R2 because Y has R (which alone is sufficient for fulfilling Y's goal)".

(2)
- Attack in TIBN: For a Goal Argument ((SuperG,B,SubG):G), show ¬p where p is a goal in SubG, i.e. a subgoal is unachievable.
- Similar attack in OBAN: Agent Y argues "I need to retain resource R (and hence your (sub)goal of obtaining R is unachievable)", where R is a resource agent X requests from Y.

(3)
- Attack: For a Goal Argument ((SuperG,B,SubG):R), show set of goals P such that achieve(P,G) where G is a goal in SuperG and R is not a goal in P, i.e. there is an alternative plan P which achieves the supergoal G and does not include R.
- Similar attacks in OBAN (in the case where R is a resource ("goal" in the language of TIBN) agent X requests from agent Y):
--- X argues "you do not need resource R (since you have a resource R2 that alone is sufficient for fulfilling your supergoal G)".
--- X argues "you do not need resource R (since I have a resource R2 that alone is sufficient for fulfilling your supergoal G and I will exchange with you R2 for R)".
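As a rough sketch (representation and names are my own, not TIBN's notation), a Goal Argument and checks for the first two attack forms could be encoded as:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class GoalArgument:
    """((SuperG, B, SubG) : G) from TIBN: goal G is adopted because it
    serves supergoal SuperG, the context beliefs B hold, and the plan
    SubG is achievable."""
    supergoal: str
    context: frozenset   # beliefs B justifying the goal
    subgoals: frozenset  # plan SubG for achieving G
    goal: str

def context_attack(arg: GoalArgument, denied_beliefs: set) -> bool:
    """Attack (1): disqualify a context condition by showing ¬b for
    some belief b in B."""
    return bool(arg.context & denied_beliefs)

def subgoal_attack(arg: GoalArgument, unachievable: set) -> bool:
    """Attack (2): show some subgoal p in SubG is unachievable."""
    return bool(arg.subgoals & unachievable)
```

Attack (3), exhibiting an alternative plan P with achieve(P, G) that omits R, would additionally need a planning component and is not sketched here.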

Monday, 22 December 2008

27, On the Benefits of Exploiting Hierarchical Goals in Bilateral Automated Negotiation

Some thoughts following on from re-reading 'On the Benefits of Exploiting Hierarchical Goals in Bilateral Automated Negotiation' (2007) by Iyad Rahwan et al with my aamas-submitted (not accepted) paper in mind:
  • What is presented is a protocol and not a generative model as such.

  • The negotiation framework consists of (/is limited to) two agents.

  • Agents' preferences over (sets of) resources is specified as a (pre-given) numerical utility function. "Deals" between agents (to reallocate resources) make use of "side payments" based on this utility function.

  • The relationship "sub" (linking a goal to "sub" -goals and/or -resources needed to achieve it) seems shared between all agents (though agents have no prior knowledge of each other's main goals or preferences).

  • Much in this paper rests on the existence/allowance of "partial plans" (wherein leaf nodes may be goals as well as resources) and the setting of positive interaction between agents' "shared"/"common" goals such that an agent may benefit from a common goal (or sub-goal) achieved by the other agent.

EUMAS 08 Conference

Attended and presented at the EUMAS conference last week in Bath. Received some useful questions/feedback to think about, as follows:
  • The title ('On the benefits of argumentation for negotiation - preliminary version') is a bit misleading (given the narrow scope of this work). Also, should think about potential/real drawbacks of using argumentation for negotiation as well as its benefits, i.e. look at things more objectively.

  • Look at game-theoretic models. Contrast my work with theirs. Agents providing reasons/justifications with requests as in this paper would not be enough (in and of itself) to argue argumentation-based negotiation (ABN) over game-theoretic (GT) approaches. For example, the act of an agent providing a reason with a request may not always be advantageous; providing a reason could rule out an "offer" (in the mind of the recipient agent) that would otherwise have been acceptable. It may (also) not always be strategically advantageous for an agent to provide reasons with its requests since the recipient agent could use this against the requesting agent.

  • Agents providing reasons with dialogue moves doesn't increase the number of solutions possible unless agents provide their overlying goals with their reasons, like the "hammer and nail" example in an earlier paper. Otherwise agents are only justifying their dialogue moves.

  • How come reasons can be provided with a 'refuse' response but not with an 'accept'?

  • The work of Nicolas Hormazabal ('Trust aware negotiation') could be useful.

  • The presentation was perhaps overly simplistic. Looks a bit like I have created/used a problem/solution to justify argumentation and not the other way round, i.e. rather than creating/using an argumentative approach to solve a real problem. Also, sequences/concurrency of the dialogues: it was not clear from the presentation; it came across as though only one dialogue move/instance is made at a time in sequence regardless of the number of agents in the agent system.

  • A story from Cuba (spurred by my bilateral agent negotiation approach): each person prefers the house of his neighbour (only) over his own, creating a big circle of potential swaps. Eventually, (if/) once the circle is established/known, each person moves into the house of his neighbour resulting in a happier society. Point being: why not have everyone report their desires/preferences publicly and have the final result/allocation decided upon centrally like in an auction? Wouldn't that be easier?

Tuesday, 30 October 2007

34, A Verifiable Protocol for Arguing about Rejections in Negotiation

Notes taken from 'A Verifiable Protocol for Arguing about Rejections in Negotiation' (2005), by Jelle van Veenen and Henry Prakken

1, Introduction

2, Negotiation and Argumentation

Speech acts and replies in negotiation with embedded persuasion:

Negotiation

Act: request(a)
Attacks: offer(a'), withdraw
Surrenders:

Act: offer(a)
Attacks: offer(a') (a /= a'), reject(a), withdraw
Surrenders: accept(a)

Act: reject(a)
Attacks: offer(a') (a /= a'), why-reject(a), withdraw
Surrenders:

Act: accept(a)
Attacks:
Surrenders:

Act: why-reject(a)
Attacks: claim(¬a), withdraw
Surrenders:

Act: withdraw
Attacks:
Surrenders:

Persuasion

Act: claim(a)
Attacks: why(a)
Surrenders: concede(a)

Act: why(a)
Attacks: argue(A) (conc(A) = a)
Surrenders: retract(a)

Act: argue(A)
Attacks: why(a) (a is in prem(A)), argue(B) (B defeats A)
Surrenders: concede(a) (a is in prem(A) or a = conc(A))

Act: concede(a)
Attacks:
Surrenders:

Act: retract(a)
Attacks:
Surrenders:

The speech acts above show the combination of languages for negotiation and persuasion. The negotiation is extended with the why-reject locution, which allows a negotiation to shift into a persuasion subdialogue.
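The reply structure above can be captured in a simple lookup table. The sketch below is my own encoding, not from the paper, and omits the side conditions on argument content (a /= a', conc(A) = a, B defeats A, etc), which would need separate checks:

```python
# For each speech act: which acts attack it and which surrender to it.
REPLIES = {
    "request":    {"attacks": {"offer", "withdraw"},               "surrenders": set()},
    "offer":      {"attacks": {"offer", "reject", "withdraw"},     "surrenders": {"accept"}},
    "reject":     {"attacks": {"offer", "why-reject", "withdraw"}, "surrenders": set()},
    "accept":     {"attacks": set(),                               "surrenders": set()},
    "why-reject": {"attacks": {"claim", "withdraw"},               "surrenders": set()},
    "withdraw":   {"attacks": set(),                               "surrenders": set()},
    "claim":      {"attacks": {"why"},                             "surrenders": {"concede"}},
    "why":        {"attacks": {"argue"},                           "surrenders": {"retract"}},
    "argue":      {"attacks": {"why", "argue"},                    "surrenders": {"concede"}},
    "concede":    {"attacks": set(),                               "surrenders": set()},
    "retract":    {"attacks": set(),                               "surrenders": set()},
}

def legal_reply(previous_act: str, reply: str) -> bool:
    """Is `reply` a legal (attacking or surrendering) response?"""
    entry = REPLIES[previous_act]
    return reply in entry["attacks"] or reply in entry["surrenders"]
```

Note how why-reject is the only act whose attacking replies (claim) cross from the negotiation acts into the persuasion acts, which is exactly the dialogue shift the paper introduces.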

3, An Example

4, Conclusion

Illocutions for Persuasive Negotiation (2)

The dialogue primitives (performatives) described in 'Logic agents, dialogues and negotiation: an abductive approach' (2001) are of the form tell(a,b,Move,t) where a and b are the sending and the receiving agents, respectively, t represents the time when the primitive is uttered, and Move is a dialogue move, recursively defined as follows:

- request(give(R)) is a dialogue move, used to request a resource R;

- promise(give(R),give(R')) is a dialogue move, used to propose and to commit to exchange deals, of resource R' in exchange for resource R;

- if Move is a dialogue move, so are
--- accept(Move), refuse(Move) (used to accept/refuse a previous dialogue Move)
--- challenge(Move) (used to ask a justification for a previous Move)
--- justify(Move) (used to justify a past Move, by means of a Support)

There are no other dialogue moves, except the ones given above.
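The recursive Move grammar can be sketched as plain Python classes. The class and field names here are mine, and since the paper's Support is treated abstractly in these notes, it is just a string below.

```python
# A sketch of the recursive dialogue-move grammar from the abductive approach.
from dataclasses import dataclass
from typing import Union

@dataclass
class Request:            # request(give(R))
    resource: str

@dataclass
class Promise:            # promise(give(R), give(R'))
    wanted: str           # resource R requested
    offered: str          # resource R' offered in exchange

@dataclass
class Accept:             # accept(Move)
    move: "Move"

@dataclass
class Refuse:             # refuse(Move)
    move: "Move"

@dataclass
class Challenge:          # challenge(Move)
    move: "Move"

@dataclass
class Justify:            # justify(Move), by means of a Support
    move: "Move"
    support: str

Move = Union[Request, Promise, Accept, Refuse, Challenge, Justify]

@dataclass
class Tell:               # tell(a, b, Move, t)
    sender: str
    receiver: str
    move: Move
    time: int

# agent a challenges b's refusal of an exchange deal
deal = Promise(wanted="nail", offered="hammer")
utterance = Tell("a", "b", Challenge(Refuse(deal)), time=3)
```

The recursion is what makes the language expressive: accept, refuse, challenge and justify all wrap an arbitrary earlier move, so a challenge of a refusal of a promise is a single well-formed utterance.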

Monday, 29 October 2007

Illocutions for Persuasive Negotiation (1)

In the paper 'A Framework for Argumentation-Based Negotiation' (1997) the authors (Carles Sierra et al) discuss three types of illocutions that serve a persuasive function in negotiation:
(i) threats — failure to accept this proposal means something negative will happen to the agent;
(ii) rewards — acceptance of this proposal means something positive will happen to the agent; and
(iii) appeals — the agent should prefer this option over that alternative for this reason.

The illocutionary acts can be divided into two sets: negotiation particles, used to make offers and counter-offers (offer, request, accept, reject), and persuasive particles, used in argumentation (appeal, threaten, reward).

The negotiation dialogue between two agents consists of a sequence of offers and counter-offers containing values for the issues. These offers and counter-offers can be just conjunctions of ‘issue = value’ pairs (offer) or can be accompanied by persuasive arguments (threaten, reward, appeal). ‘Persuasion’ is a general term covering the different illocutionary acts by which agents try to change the other agent’s beliefs and goals.

appeal is a particle with a broad meaning, since there are many different types of appeal. For example, an agent can appeal to authority, to prevailing practice or to self-interest. The structure of the illocutionary act is
appeal(x,y,f,[not]a,t),
where a is the argument that agent x communicates to y in support of a formula f.

threaten and reward are simpler because they have a narrower range of interpretations. Their structure,
threaten(x,y,[not]f1,[not]f2,t)
reward(x,y,[not]f1,[not]f2,t)
is recursive since formulae f1 and f2 again may be illocutions. This recursive definition allows for a rich set of possible (illocutionary) actions supporting the persuasion.
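The recursive structure can be sketched as follows: the f1/f2 slots of threaten and reward may themselves hold illocutions, so one threat can embed another. Formulae are kept as plain strings here, and all concrete names are illustrative only.

```python
# A sketch of Sierra et al's recursive persuasive illocutions.
from dataclasses import dataclass
from typing import Union

Formula = str  # formulae (possibly negated, written "not f") kept as strings

@dataclass
class Appeal:             # appeal(x, y, f, [not] a, t)
    sender: str
    receiver: str
    formula: Formula      # the formula f being supported
    argument: Formula     # the argument a offered in support of f
    time: int

@dataclass
class Threaten:           # threaten(x, y, [not] f1, [not] f2, t)
    sender: str
    receiver: str
    f1: Union[Formula, "Illocution"]
    f2: Union[Formula, "Illocution"]
    time: int

@dataclass
class Reward:             # reward(x, y, [not] f1, [not] f2, t)
    sender: str
    receiver: str
    f1: Union[Formula, "Illocution"]
    f2: Union[Formula, "Illocution"]
    time: int

Illocution = Union[Appeal, Threaten, Reward]

# "accept the deal, or I will (then) threaten to withdraw cooperation":
# a threat whose sanction slot is itself a further threat
nested = Threaten("x", "y", "accept(deal)",
                  Threaten("x", "y", "accept(deal)", "not cooperate", time=2),
                  time=1)
```

This is exactly the richness the recursion buys: the sanction or reward attached to a proposal can itself be a structured illocutionary act rather than a flat formula.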

Agents can use the illocutions according to the following negotiation protocol:
1. A negotiation always starts with a deal proposal, i.e. an offer or request. In illocutions the special constant ‘?’ may appear. This is thought of as a petition to an agent to make a detailed proposal by filling the ‘?’s with defined values.
2. This is followed by an exchange of possibly many counter proposals (that agents may reject) and many persuasive illocutions.
3. Finally, a closing illocution is uttered, i.e. an accept or withdraw.
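The three stages above can be captured by a small validator over sequences of act names. This is a simplification (it ignores turn-taking, message contents, and the ‘?’ petitions), but it shows the open/middle/close shape of the protocol.

```python
# Stage structure of the negotiation protocol: open with a deal proposal,
# exchange counter-proposals and persuasive illocutions, close explicitly.
OPENING = {"offer", "request"}
MIDDLE  = {"offer", "request", "reject", "appeal", "threaten", "reward"}
CLOSING = {"accept", "withdraw"}

def follows_protocol(dialogue: list) -> bool:
    """True iff the sequence of acts respects the three protocol stages."""
    if len(dialogue) < 2:  # at minimum: one proposal and one closing act
        return False
    return (dialogue[0] in OPENING
            and all(act in MIDDLE for act in dialogue[1:-1])
            and dialogue[-1] in CLOSING)

assert follows_protocol(["offer", "reject", "threaten", "offer", "accept"])
assert not follows_protocol(["accept", "offer"])  # cannot open with accept
```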

-----

In the paper 'Arguments, Dialogue, and Negotiation' (2000) the authors (Leila Amgoud et al) present a number of moves, describing how each move updates the Commitment Stores (the update rules), which moves the other agent may legally make next (the dialogue rules), and how each move integrates with the agent’s use of argumentation (the rationality rules). The moves are classified as follows:

(i) Basic Dialogue Moves (assert(p), assert(S), question(p), challenge(p));
(ii) Negotiation Moves (request(p), promise(p => q));
(iii) Responding Moves (accept(p), accept(S), accept(p => q), refuse(p), refuse(p => q)).

The authors argue that this set of moves is sufficient to capture the communication language of the Sierra et al paper discussed above.
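As a rough sketch of the update-rule idea, here is one plausible commitment-store policy for these moves. This is my reading of typical commitment-store updates, not the paper's exact definitions: asserting or accepting a proposition commits the speaker to it, while questions, challenges and refusals leave the stores unchanged.

```python
# A hedged sketch of commitment-store updates for the Amgoud et al moves.
# Assumption (mine, not the paper's): assert and accept commit the speaker;
# question, challenge and refuse are information-seeking or rejecting moves
# and leave all stores untouched.
def update_cs(stores: dict, speaker: str, act: str, content: str) -> dict:
    """Update the per-agent commitment stores for one move and return them."""
    if act in {"assert", "accept"}:
        stores[speaker] = stores.get(speaker, set()) | {content}
    return stores

cs = {}
update_cs(cs, "A", "assert", "p")   # A commits to p by asserting it
update_cs(cs, "B", "accept", "p")   # B commits to p by accepting it
update_cs(cs, "B", "challenge", "q")  # no commitment incurred
assert cs == {"A": {"p"}, "B": {"p"}}
```

Something along these lines is worth checking against the paper's actual update rules, in particular for promise(p => q), where presumably the conditional itself enters the store.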