Tuesday 24 November 2009

63, A Redefinition of Arguments in Defeasible Logic Programming

A paper ('A Redefinition of Arguments in Defeasible Logic Programming', 2009, by Ignacio Viglizzo, Fernando Tohme, Guillermo Simari) I had to read and be a 'discussant' for at the 'Uses of Computational Argumentation' workshop (held in Washington DC) I attended earlier this month (part of the AAAI 2009 Fall Symposium Series).

Good paper. Well written. Easy to follow/understand. Quite similar in what it does to the paper I presented ('Assumption-Based Argumentation for Communicating Agents') at the workshop - doing for DLP (partially) what I did for ABA, i.e. a step towards making it applicable for multi-agent contexts.

Tuesday 17 November 2009

Attempts at summing up my PhD

A couple of attempts, with two different people over the last two days, to sum up my PhD...

Attempt 1:
... the rough area is "distributed artificial intelligence". The concept of "artificial intelligence" (AI) is to make smart computer programs. Within AI is the concept of "agents" - autonomous, proactive, reactive computer programs - that, given a goal or task, should be able to go off, work out how best to achieve it and achieve it. I look at a specific problem (can explain upon request) making use of "multi-agent systems", where multiple "agents" have to collaborate/cooperate/dialogue (can elaborate upon request) to achieve their individual goals, and as a result the "society" of agents also benefits...

Attempt 2:
... the rough area is "distributed artificial intelligence", or, more specifically "multi-agent systems". An "agent" is an "autonomous", "proactive", "reactive" computer program - that, given a "goal" or "task", should be able to go off, work out how best to achieve it and achieve it. I model my agents, their internal reasoning/decision-making mechanisms, using a form of logic based on "argumentation", and the interactions between the agents by means of communication protocols termed "dialogues". Concretely, I am trying to apply my framework to a specific problem (the "resource reallocation problem" - can explain upon request) where multiple agents have to collaborate/cooperate/dialogue to achieve their individual goals, and as a result the "society" of agents also benefits...

I thought I was getting better at explaining my PhD topic. From the responses I received, apparently not!

Wednesday 23 September 2009

62, Agents that reason and negotiate by arguing

A seminal piece of work ('Agents that reason and negotiate by arguing', 1998, Simon Parsons, Carles Sierra, Nick R Jennings). Finally went through it after 3 years! Need to compare it with my (A)ABA-based ABN framework.

Nice examples of 'negotiation' dialogues (proposal, critique, counter-proposal, explanation) in Section 2.1. Would be nicer if they could be *generated*.

Can't see how the stuff in Sections 3-5 (agent architecture etc) links to the negotiation protocol in Section 2.2.

No concept of 'assumptions' or the ability for agents to reason (make decisions/utterances) despite incomplete information. We allow for this.

No implementation, though it is claimed there is a clear link between the formal (agent architecture) model and its practical instantiation. We support our framework with an implementation.

The framework is based on an ad hoc system of argumentation. Arguments can be classified into rough classes of acceptability, but this is not enough to determine the acceptability of arguments. Also, only inconsistency *between* agents is considered; inconsistency that arises within an agent is not considered/handled. We base our framework on a general argumentation system (AABA) for which the argument acceptability semantics are clearly defined.

This is what I intend to include in the Related Work section of my forthcoming "argmas09paper":
In [61] a negotiation language and protocol are presented that allow for the exchange of complex proposals which can include compelling arguments for why a proposal should be adopted. Whilst [61] does not concentrate on the way in which arguments are built and analysed, the work is extended in [62] by indicating how argumentation can be used to construct proposals, create critiques, and provide explanations and meta-information. However, even in [62], further expansion is required for agents to be able to generate and rate arguments, and for any kind of implementation to be produced. In particular, the acceptability classes used in [62] to rank arguments are not sufficient to resolve inconsistencies that may arise within and between agents; a more fine-grained mechanism is required. We use an existing argumentation framework (AABA) for this purpose, one that is able to build arguments and determine their acceptability even as the knowledge bases of agents change over time (as a result of the dialogues). The AABA framework also allows agents to make assumptions, enabling them to make decisions despite incomplete information. Lastly, we supplement our formal model with an implementation.

Friday 18 September 2009

61, A framework for argumentation-based negotiation

Old paper ('A framework for argumentation-based negotiation', 1998, Carles Sierra et al) but some really good ideas for using negotiation (offer, request, accept, reject and withdraw acts) with persuasion (appeal, threaten and reward acts). However, like most other papers, not fully worked out / generative.

To its advantage, the framework is for multi- (i.e. more than two) agent settings: "Deals are always between two agents, though an agent may be engaged simultaneously in negotiation with many agents for a given deal."

The 'attacks' relationship between "argument pairs" (i.e. an argument Arg supporting a formula p) is assumed to be a primitive notion, even though argument pairs themselves are not assumed to be primitive notions. Defining such an 'attacks' relationship could get messy!

An authority relation (between agent roles) is used as the mechanism for comparing arguments, i.e. who puts forward an argument is as important as (maybe more important than) what is said. This arguably doesn't quite make sense from an argument-evaluation perspective - it depends on what is meant by "argument". See last paragraph of Section 4.1.1 of 'Argumentation-Based Negotiation' (1998).

Wednesday 16 September 2009

60, Dialogue games that agents play within a society

Went through this journal paper ('Dialogue games that agents play within a society', 2009, Nishan C. Karunatillake et al) and the accompanying technical report ('Formal Semantics of ABN Framework', 2008, Nishan C. Karunatillake et al) after going through the main author's thesis. Questions similar to those I had on the thesis (see 59). In addition, this is what I plan on including in my forthcoming (argumentation-based negotiation social optimality) paper...

"... The argument-based negotiation framework of [60] is supplemented with a number of concrete negotiation strategies which allow agents to exchange arguments as part of the negotiation process. The example scenario/context considered allows for multiple (more than two) agents. However, contrary to our approach, the semantics of arguments is not considered. Instead, the focus is on using argumentation as a metaphor for characterising communication among agents. Also, deals involving more than two agents are not possible, as is required in our resource allocation setting in order to reach optimal allocations. ..."

Tuesday 15 September 2009

59, Argumentation-Based Negotiation in a Social Context

This ('Argumentation-Based Negotiation in a Social Context', Nishan C. Karunatillake, 2006) is the third thesis I have gone through now. I really liked, and read fully, the first three chapters. Quite a few question marks penned when going through the protocol and operational semantics in Chapter 3 - questions regarding the method of arguing (challenging and asserting). Apparently these questions are addressed in a later journal paper and technical report, which I will go through now. It is worth noting that the negotiation/argumentation strategies defined in later chapters are (in my opinion) far from using the full capacity of the framework defined in Chapter 3.

I really liked the scenario presented in Chapter 4.1. Very good - lots of scope for play/conflicts/etc. However, the system model following this in Chapter 4.2 becomes very mathematics/number-based. The "argue" method doesn't really argue. It is quite similar to my "argmas09" paper - the responding agent provides a reason for rejection which the proposing agent incorporates into its knowledge-base for future proposals.

The strategies defined in Chapter 5.1 for the experimentation (empirical analysis) struck me as rather arbitrary - there seems to be no justification at all for these strategies over any others. I couldn't quite see the general applicability of the results presented in Chapter 5.3 beyond the specific application setting of the thesis. I skipped/glossed over to the summary.

Chapter 6 proceeds by simplifying the experimental scenario defined in Chapter 4.1 for the argumentation strategies to be defined in this chapter. Really not clear what the defeat-status computation (to determine the validity of a claim/premise) used in these strategies is. Also, I couldn't quite work out how the argumentation was done in the strategies - not clear what is being challenged/asserted. This chapter could have done with some examples.

Friday 4 September 2009

Karunatillake's Thesis - Chapter 3

My first set of questions sent to Nishan C. Karunatillake regarding chapter 3 of his thesis:
Thanks for the response.

Sorry if my questions are really technical/low-level. It's just that I am developing a model for multi-agent argumentative negotiation, came across your work, and am trying to see if my model could map to your language, protocol etc. I have not read beyond Chapter 3 yet, so I apologise if some of my questions are answered later. Please let me know if that is the case.

I'll try to ask only a few questions at a time, in the order they occurred to me whilst working sequentially through the chapter, so as not to bombard you, and in case questions that occurred later become clear in the meantime.

Here goes...

- Looking at the 'Challenge' communicative predicate described as part of the protocol (page 79), one of the pre-conditions for challenging a rejection (or assertion) is that there be no 'reason' for the rejection (assertion respectively) in the agent's knowledge-base. Could it not be possible (in the context of this thesis) that there is a reason for rejecting, as well as a (counter-)reason for not rejecting in the agent's knowledge-base at the same time? Or is it meant here that 'the reason for rejecting' is stronger than 'the reason for not rejecting' in an argumentative semantic/heuristic sense?

- Also, the only valid response following a Challenge is for the other agent to Assert the justification (H). Could it not be possible (in the context of this thesis) for the agent that is to respond to *not* have a justification? What if the agent has no justification (if possible), how would the dialogue then proceed? What is meant by justification - that the justification is 'valid' according to the agent's knowledge-base, or is justification here meant more simply in a kind of deductive sense?

Hope that makes sense.
The response...
I see. I reckon it should not be that difficult. You may need to define your own domain language (one that describes your context or argumentation schema/model). Then link that domain language with the comm. language and protocol defined in the thesis. If you wish to do this formally, then you might need to alter some of the rules of the axiomatic and operational semantics to suit your application.

OK. Before I get to your specific question, I would recommend that you read the AIJ paper which followed this thesis (instead of the thesis). This is better than reading the version in the thesis, as I introduced some minor alterations afterwards to both the axiomatic semantics and the operational semantics. Since AIJ doesn't allow on-line appendices, we also published the complete semantics as a separate technical report. The links to both these documents are:

AIJ paper - http://users.ecs.soton.ac.uk/nnc/docs/aij09.pdf
Tech report - http://eprints.ecs.soton.ac.uk/16851/2/techreport.pdf

Now to your question.

In defining the comm. language I used a notion similar to operation overloading. In other words, certain language predicates are used for more than one, similar, but not identical, purpose. The objective is to limit the number of language predicates and not unnecessarily duplicate them. For instance, both the proponent and respondent can use the Open-Dialogue predicate, but they would have different pre- and post-conditions. Thus, the distinction becomes clear at the semantics level (both the axiomatic and the operational semantic level) and not necessarily at the syntax level. Both Challenge and Assert are used this way. In particular, the Challenge locution can be used for two purposes: (i) by the proponent to challenge the reason for rejecting a proposal; (ii) by either the proponent or respondent to challenge the reason for a particular assertion. The same applies to the use of the Assert and Close-Dialogue locutions.

Q1

As mentioned above, Challenge is used for two purposes. This question, as I understand it, is related to the first purpose, Challenging the reason for rejection (the second being Challenging a particular Assertion).

In more detail, the respondent may choose to either accept a particular proposal or reject it. This decision is based on the respondent's R2 decision mechanism (see page 951 of the AIJ paper or page 85 of the thesis).

Yes, you are right: the respondent may have zero or more reasons for accepting a particular proposal and also zero or more reasons for rejecting it. The decision is based on which is the more compelling reason.

My agents are computational agents that attempt to maximise utility. So they calculate the cost vs benefit of the proposal: if the benefit is greater, accept; otherwise reject. In an argumentation sense this can be read as: if the reason(s) for accepting are stronger than the reason(s) for rejecting, accept; otherwise reject.

So when the proponent Challenges (the reason for rejecting a particular proposal), the respondent will pass on its reason(s). It would say it was compelled to reject because this and this reason (to reject) was much stronger than this and this reason (to accept).

If this reason is in conflict with the Proponent's knowledge-base, then the dialogue may shift to a persuasive dialogue trying to correct any inconsistencies in each other's reasons (the proponent's reasons why the proposal should be accepted vs the respondent's reasons why it was rejected).

Q2

Yes, the only valid response to a Challenge locution is an Assert. See also Figure 4 (page 948 in the AIJ) and, in more detail, Figure B1 (page 979 in the AIJ).

Case 1: If the challenge was a challenge of the reason for rejecting, then that reason is asserted. In my context, the respondent would say: I believe the benefit of the proposal, due to this and this reason, is this, but the cost of accepting the proposal, due to this and this reason, is that. So the cost is higher than the benefit; thus the reason for rejection.

Case 2: If the challenge was of the justification for a particular assertion it has made, then the reason behind that assertion will be returned. This follows the schema in the form of deductive equations (5) and (6) on page 943 of the AIJ.

Yes, theoretically the reason can be null.

In the first case, it may simply mean, as the reason for rejection: I don't have any reason to accept (no reward), so I rejected. This doesn't cause much complication. The proponent will analyse why it thinks the respondent should accept (the proponent's reasons, if it has any) vs the given null reason from the respondent. If there is a conflict, it will argue that out (pointing out, for instance, that the respondent may have misunderstood the reward) or give an alternative proposal with a reward.

In the latter case, it may mean: I don't have a reason for asserting X, but I believe X to be true. Again, the other party will compare this with its own reasons (why X should be false) and will either argue to correct the opponent's knowledge or correct its own knowledge.

Hope this clarifies things a bit.
Will continue skimming through the thesis anyway, despite the suggestion otherwise, before moving on to check out the journal paper and technical report.

Monday 24 August 2009

Assumption-based Argumentation for Multiagent Systems

Had my paper ('Assumption-based Argumentation for Multiagent Systems', 2009, Adil Hussain, Francesca Toni) accepted at the 'Uses of Computational Argumentation' workshop to take place as part of the AAAI 2009 Fall Symposium Series, 4-7th November 2009, Washington DC. Need to make a few small changes before submitting the second/final version, as requested by the reviewers.

Note: In future work, building upon and taking this paper further, I need to think about and work on the following:
1) Agents having incorrect or out-of-date beliefs, as well as inconsistent beliefs.
2) Allowing general inference rules in the private belief base of agents, and not just facts.

Wednesday 19 August 2009

58, Assumption-Based Argumentation

This chapter ('Assumption-Based Argumentation' by Phan Minh Dung, Robert A Kowalski, Francesca Toni), written for the book 'Argumentation in Artificial Intelligence' (2009), contains the representation of an argument as a tree, which I adapt in my paper submitted to 'Uses of Computational Argumentation'. The trees in this chapter display the structural relationships between claims and assumptions, justified by the inference rules.

Wednesday 5 August 2009

56, Negotiating Socially Optimal Allocations of Resources

Can't believe I haven't blogged this paper ('Negotiating Socially Optimal Allocations of Resources', 2006, by Ulle Endriss et al) til now. Fundamental!

Tuesday 14 July 2009

PrologBeans Intro

I use PrologBeans to interface Jade (built on Java) and Prolog. Google it for details. Roughly, what you need to do is:
  1. Start up an agent (a Java file). E.g. 'AgentNegotiatior.java' as contained in my 'argmas09modified' directory.
  2. From this, make a connection to a Prolog server using 'PBConnect.java' (leave this unchanged) which requires a Prolog file like 'run.pl'. The Prolog file (in my case 'run.pl') loads up 'pbconnection.pl' (which I have modified for my purposes) and whatever other Prolog files you need to load (containing your Prolog clauses) before calling 'main' (defined in 'pbconnection.pl').
  3. Run your queries from the Java file using a 'PrologSession' instance.
  4. Shut down the server upon completion from your Java file.
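
For reference, a minimal sketch of what the 'run.pl' of step 2 might look like (the file name 'my_clauses.pl' is just a placeholder for whichever files hold your own clauses):

:- ['pbconnection.pl'].    % the (modified) PrologBeans connection code
:- ['my_clauses.pl'].      % placeholder for your own Prolog clauses
:- main.                   % 'main' is defined in pbconnection.pl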

Hope that makes sense.

Monday 22 June 2009

55, CaSAPI: a system for credulous and sceptical argumentation

This paper ('CaSAPI: a system for credulous and sceptical argumentation', 2007, Dorian Gaertner, Francesca Toni) was the first to generalise ABA frameworks to allow multiple contraries.

Saturday 13 June 2009

Dispute Derivation Choices

Excellent description in 'Computing Arguments and Attacks in Assumption-Based Argumentation' (2007, Dorian Gaertner, Francesca Toni) of the five types of choices that need to be made in any implementation of the structured AB-dispute derivation algorithm, i.e.

- Choice of player (P or O);
- Choice of argument (in P or O);
- Selection function (a sentence from the chosen argument);
- Choice of (inference) rule (if chosen sentence is not assumption);
- Choice to ignore (if the opponent is selected and the selection function returns an assumption).

It is stated in this paper that the 'choice of argument' does not apply to AB-dispute derivations, since arguments aren't explicit in AB-dispute derivations. I don't think this is correct. If the opponent is selected in an AB-dispute derivation, the choice of 'S in Oi' is like a 'choice of argument', and the choice of 'sigma in S' is determined by the 'selection function'.

Thursday 11 June 2009

54, Argumentation Based on Classical Logic

Really well written paper ('Argumentation Based on Classical Logic', 2009, Philippe Besnard, Anthony Hunter). Loads of examples throughout. I like the concept of an argument being 'more conservative' than another (i.e. it is "less demanding on the support and less specific about the consequent") and that of a 'maximally conservative undercut'. The argument trees considered are "merely a representation of the argumentation" and (unlike in 'abstract argument systems') do not mark cases where the argumentation is infinite and unresolved as such.

Wednesday 10 June 2009

53, Hybrid Argumentation and its Properties

This paper ('Hybrid Argumentation and its Properties', 2008, Dorian Gaertner and Francesca Toni) presents a hybrid between abstract and assumption-based argumentation. Good re-usable intro to the ABA framework in Section 2.

Saturday 6 June 2009

52, Argumentative Agent Deliberation, Roles and Context

This paper ('Argumentative Agent Deliberation, Roles and Context', 2002, Antonis Kakas and Pavlos Moraitis) presents an argumentation framework, built on 'Logic Programming without Negation as Failure', that makes use of three levels of rules (in the examples at least): 'object-level decision rules', 'role (or default context) priorities' and '(specific) context priorities'. It hints at using abduction for agents to make assumptions under incomplete knowledge, but I didn't quite get it. Good deliberation examples making use of rules, priorities over rules and priorities over priorities over rules.

Friday 5 June 2009

'Sentence'

"a sentence is a 'formula' in which every occurrence of a variabl (if any) is within the scope of a quantifier for that variable."

(Introduction to Logic Programming (page 11), by Christopher John Hogger)

Friday 22 May 2009

maraIRAgents

Just finished the first version of the 'maraIRAgents' implementation (2 agents, 1+ resources each, 1 goal each, distributed fulfils plans).

Seems to run and not loop infinitely, but I have identified a problem case, as follows:

-----

a1: goal(a1,g1), has(a1,r1), fulfils(r2,g1)
a2: goal(a2,g2), has(a2,r2), fulfils(r1,g1), fulfils(r2,g2)

-----

If a2 can communicate fulfils(r1,g1) to a1 then both agents end successfully but this doesn't happen.

Solution: The responding agent should only agree to a request if one of the two agents ends up better off (similar to the condition for initiating a request). Otherwise it should refuse, providing an argument to that effect.
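
As a rough sketch of that condition in Prolog (the predicate names and the 'better off' test are my own reading, checked against the responding agent's knowledge; member/2 and select/3 are from the standard lists library):

% an agent is 'ok' under an allocation if some resource it holds fulfils its goal
ok(Ag, Alloc) :-
    goal(Ag, G),
    member(has(Ag, R), Alloc),
    fulfils(R, G).

better_off(Ag, Before, After) :-
    \+ ok(Ag, Before),
    ok(Ag, After).

% responder Res agrees to give resource R to requester Req only if one of
% the two agents ends up better off once R changes hands
agree(Res, Req, R, Before) :-
    select(has(Res, R), Before, Rest),
    ( better_off(Req, Before, [has(Req, R) | Rest])
    ; better_off(Res, Before, [has(Req, R) | Rest])
    ).

In the problem case above, from a2's point of view a1 is already 'ok' (a2 knows fulfils(r1,g1) and that a1 has r1), so agree/4 fails, a2 refuses, and fulfils(r1,g1) is exactly the argument to attach to the refusal.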

Wednesday 20 May 2009

Deliberation Examples

I need to think of an example for a multi-agent deliberation-like dialogue to include in a forthcoming paper. Here are some first tries:

--- 1 ---

swapAppointments(Ag1,Ag2,App1,App2) <- requires(Ag1,Req1), fulfils(App2,Req1), has(Ag1,App1), ¬fulfils(App1,Req1), canSwap(Ag2,App2,App1)

cantSwap(Ag,App1,App2) <- ¬has(Ag,App1)

cantSwap(Ag,App1,App2) <- has(Ag,App1), requires(Ag,Req), fulfils(App1,Req), ¬fulfils(App2,Req)

Assumptions = {¬fulfils(App,Req), canSwap(Ag,App1,App2)}

Contrary(¬fulfils(App,Req)) = fulfils(App,Req)
Contrary(canSwap(Ag,App1,App2)) = cantSwap(Ag,App1,App2)

Consider two concrete agents, ag1 and ag2, with initial private beliefs as follows:
Priv(ag1) = {has(ag1,app1), requires(ag1,fridayAppointment), fulfils(app1,morningAppointment), fulfils(app2,fridayAppointment)}
Priv(ag2) = {has(ag2,app2), requires(ag2,morningAppointment), fulfils(app2,morningAppointment)}

--- 2 ---

buy(House) <- withinBudget(House), goodLocation(House)

badLocation(House) <- farFromWork(House), badTransportLinks(House)

goodTransportLinks(House) <- nearBusStop(House), frequentBusService(House)

Assumptions = {goodLocation(House), badTransportLinks(House), frequentBusService(House)}

Contrary(goodLocation(House)) = badLocation(House)

Contrary(badTransportLinks(House)) = goodTransportLinks(House)

Contrary(frequentBusService(House)) = infrequentBusService(House)

Consider two concrete agents, ag1 and ag2, with initial private beliefs as follows:
Priv(ag1) = {withinBudget(house1), nearBusStop(house1)}
Priv(ag2) = {farFromWork(house1)}

--- 3 ---

watch(Ag1,Ag2,Film) <- criticallyAcclaimed(Film), willLike(Ag1,Film), willLike(Ag2,Film)

wontLike(ag2,Film) <- actor(Film,timRobbins), boring(Film)

¬boring(Film) <- actor(Film,morganFreeman), goodUserRating(Film)

Assumptions = {willLike(Ag,Film), boring(Film), goodUserRating(Film)}

Contrary(willLike(Ag,Film)) = wontLike(Ag,Film)

Contrary(boring(Film)) = ¬boring(Film)

Contrary(goodUserRating(Film)) = badUserRating(Film)

Consider two concrete agents, ag1 and ag2, with initial private beliefs as follows:
Priv(ag1) = {criticallyAcclaimed(shawshankRedemption), actor(shawshankRedemption,morganFreeman)}
Priv(ag2) = {actor(shawshankRedemption,timRobbins)}

------
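
As a quick sanity check on example 1 above, here is one way its shared part might be encoded for a rule/assumption/contrary style Prolog representation (rule/2, assumption/1, contrary/2 and the not_ prefix standing in for ¬ are just conventions chosen here, not fixed by the example):

rule(swapAppointments(Ag1, Ag2, App1, App2),
     [requires(Ag1, Req1), fulfils(App2, Req1), has(Ag1, App1),
      not_fulfils(App1, Req1), canSwap(Ag2, App2, App1)]).
rule(cantSwap(Ag, App1, _App2),
     [not_has(Ag, App1)]).
rule(cantSwap(Ag, App1, App2),
     [has(Ag, App1), requires(Ag, Req), fulfils(App1, Req), not_fulfils(App2, Req)]).

assumption(not_fulfils(_App, _Req)).
assumption(canSwap(_Ag, _App1, _App2)).

contrary(not_fulfils(App, Req), fulfils(App, Req)).
contrary(canSwap(Ag, App1, App2), cantSwap(Ag, App1, App2)).

% ag1's private beliefs (ag2's are written analogously)
fact(ag1, has(ag1, app1)).
fact(ag1, requires(ag1, fridayAppointment)).
fact(ag1, fulfils(app1, morningAppointment)).
fact(ag1, fulfils(app2, fridayAppointment)).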

Wednesday 6 May 2009

Reviewing the eumas08 negotiation policy

Consider an agent system consisting of 2 agents and 2 resources as follows:

a1 has r1 and needs r1, r2
a2 has r2 and needs r1, r2

According to the eumas08 negotiation policy (simple and reason-based procedures) both agents end unsuccessfully. However, a1 could end successfully if a2 gave it r2. Likewise, a2 could end successfully if a1 gave it r1. However, according to the policy, neither will make this sacrifice and thus an optimal (maximal) number of agents that end successfully is not reached.
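
The deadlock is easy to see if the acceptance condition ("give away a resource iff I have it and do not need it") is written out, say in Prolog (predicate names are mine):

has(a1, r1).  needs(a1, r1).  needs(a1, r2).
has(a2, r2).  needs(a2, r1).  needs(a2, r2).

accepts(Ag, R) :- has(Ag, R), \+ needs(Ag, R).

% ?- accepts(a2, r2).   fails, so a1's request for r2 is refused
% ?- accepts(a1, r1).   fails, so a2's request for r1 is refused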

Consider including offers/arguments of the form: "Your goal G of obtaining R1, ..., Rn is not achievable because ... so give me Ri".

Monday 4 May 2009

Computing for Kids

I need to demonstrate/explain Computing to young children (9 years of age) in a fun/interactive way. I thought of the following group exercises:

Find Median - split children into teams of 7 or 9, have each team make a line, give each child in the team a number (in no particular order), ask the children in their teams to work out the middle (median) number. The key is for them to first assign a captain and order themselves by their numbers (highest to lowest or lowest to highest).

Bubble Sort - split children into teams, have each team make a line with each child spaced out from the next by one metre, give each child in the team a number (in no particular order), ask the children to sort themselves (highest to lowest or lowest to highest) by only being allowed to speak to the person immediately in front or behind. Can't get around this problem by assigning a captain!

Resource Allocation - split children into teams of 7ish, have each team make a circle, give each child in the team an item (chocolate?) and a goal (item to obtain), ask the children to maximise the number of "happy" children in their teams. Need to think of cases involving conflict.
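
For my own reference, the neighbour-swap rule the Bubble Sort teams have to discover, written as a few lines of Prolog (not something the children will see):

bubble_sort(List, Sorted) :-
    swap_adjacent(List, List1), !,   % some adjacent pair was out of order
    bubble_sort(List1, Sorted).
bubble_sort(Sorted, Sorted).         % no adjacent pair out of order: done

swap_adjacent([X, Y | Rest], [Y, X | Rest]) :- X > Y.
swap_adjacent([Z | Rest], [Z | Rest1]) :- swap_adjacent(Rest, Rest1).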

Friday 1 May 2009

Changes to Prolog deduction program

Made changes to my Prolog 'deduction' program to (properly) allow 'member', '\==' and '>' prefix predicates/operators to be used in the 'body' of 'rule' predicates.
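
Roughly, the change amounts to treating those predicates as built-ins in the deduction meta-interpreter, along these lines (a simplified sketch with rule bodies as lists, not the actual program):

prove([]).
prove([G | Gs]) :- prove_one(G), prove(Gs).

prove_one(G) :- builtin(G), !, call(G).      % hand member/2, \==/2 and >/2 straight to Prolog
prove_one(G) :- rule(G, Body), prove(Body).

builtin(member(_, _)).
builtin(_ \== _).
builtin(_ > _).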

Thursday 30 April 2009

Make goals known at outset?

Why not make agents' goals (as well as the initial system-wide resource allocation and resource-goal fulfils plans, as is currently being done) known at the outset, given that agents declare their goals freely during negotiation anyway?

Testing the time-stamp negotiation policy

Compiled and started testing the MARA (multiagent resource allocation) time-stamp negotiation policy. Seems to be working fine except that agents seem to accept two or more requests/proposals for the same resource simultaneously... instead of delaying. Problem! Looking into it now.

The importance of stupidity in scientific research

"... What makes it difficult is that research is immersion in the unknown. We just don’t know what we're doing. We can’t be sure whether we're asking the right question or doing the right experiment until we get the answer or the result..."

Good article summarising what PhDs are about.

Saturday 25 April 2009

Revising the 'argmas09' negotiation policy to use time-stamps

Started revising the 'argmas09' negotiation policy to use time-stamps such that the beliefs of agents are only accumulated, rather than revised (i.e. added/removed). The case for investigating trust (for verifying arguments/utterances of agents) and related matters seems more interesting than first anticipated.

Thursday 23 April 2009

Progress So Far

eumas 07 ('revised') paper
  • Agents have beliefs and desires over single resources, as well as (retractable) commitments to other agents over beliefs, desires and dialogue. Attaining any one of its desired resources achieves the agent's overall goal.
  • (Request-Response) Information-seeking dialogues are used to communicate beliefs and desires.
  • (Request-Response) Negotiation dialogues are used to swap single resources. An agent accepts a request to exchange resources if the resource to be received is one that it desires.
  • The negotiation policy is not complete. The case for reasons/argumentation to accompany the negotiation is put forward.

eumas 08 ('revised') paper / ('modified') implementation
  • The goal of an agent is to obtain a certain fixed set of ("needed") resources (one at a time).
  • Negotiation only (i.e. a Request followed by Accept or Refuse). No separate Information-seeking.
  • A request of an agent is to be given a certain resource by a certain agent, with an (optional) accompanying reason (i.e. "I need the resource and do not have it").
  • An acceptance has no accompanying reason. An agent accepts a request (to give away a resource) if it has and does not "need" the resource.
  • A refusal may have an accompanying reason, i.e. "I do not have the resource", "Some other agent has the resource", "I have but need the resource".
  • Two negotiation policies are compared for "effectiveness" and "completeness" - one in which agents exchange reasons, and one in which they don't. Both policies are complete, but providing reasons improves the "effectiveness" with which an agent achieves its goal (if possible) or fails (if not).
  • (See 'readme' file of 'eumas08modified' implementation for implementation notes.)

aamas 09 ('revised') paper / ('modified') implementation
  • Each agent has a (/one) named goal. Goals are fulfilled by single resources. A certain goal may be fulfilled by a choice of different resources. A certain resource may fulfil a choice of different goals.
  • Agents do not necessarily share "plans" (as to which resources fulfil which goals) at the outset. These are communicated (partially, as necessary) during negotiation.
  • Negotiation only (i.e. a Request followed by Accept or Refuse).
  • A request of an agent to another is either to be given a certain resource or to swap (single) resources, with (optional) accompanying reasons/arguments (i.e. a mixture of "needsToObtain", "notNeeds", "useful").
  • An acceptance has no accompanying reason. Agents agree to give away a resource if either they do not "need" it or they receive in return a resource of equal value.
  • A refusal may have accompanying reasons/arguments (i.e. a mixture of "needsToRetain", "notHas") plus useful additional information (i.e. alternative plans).
  • Two negotiation policies are compared for "effectiveness" and "completeness" - one in which agents exchange reasons/arguments, and one in which they don't. The policy that makes use of reasons/arguments is demonstrated to be more (but not fully) complete and more effective in identifying solutions. No formal proofs.
  • (See 'readme' file of 'aamas09modified' implementation for implementation notes.)

argmas 09 (in progress) paper / implementation

Pseudo-algorithm Explaining Argument Evaluation Implementation

(Procedural) Pseudo-algorithm explanation of my (Declarative) Prolog Argument Evaluation Procedure:
Given claim C;
  Get a backward deduced support (consisting of private facts and assumptions) S of C;
  Add C to Friends;
  Add all sentences in S to Defences;
  Get the set AttackArgs containing all arguments that attack the assumptions of S;
  Append arguments in AttackArgs to Enemies;
  Repeat: For each argument AttackArg in Enemies
    If AttackArg contains a Culprit,
      Remove AttackArg from Enemies;
    Otherwise,
      Remove AttackArg from Enemies;
      Get a contrary D of some non-Defence assumption A in AttackArg;
      Add A to Culprits;
      If D is a Friend, skip forward to Repeat;
      Get a non-Culprit-contaminated backward deduced support S of D;
      Add D to Friends;
      FilteredS = S with all Defences filtered out;
      Append sentences in FilteredS to Defences;
      Get the set AttackArgs containing all arguments that attack the assumptions of FilteredS;
      Append arguments in AttackArgs to Enemies;
  End Repeat

Explaining the implementation in terms of the Dispute Derivation Definition: the player choice and selection function are such that P is emptied immediately whenever a sentence is added to P, so a set S in O is only addressed when P is empty. Hence, when attacking an assumption A in an element S of O, the check of whether A is in P is not required (since P is empty). In conclusion, the implementation need not be modified despite the changes made to the Dispute Derivation Definition.

Explaining My Multiagent Dispute Derivation Procedure

Spent the last few weeks writing about my modified admissible belief dispute derivation procedure. Except for one proof outstanding, the first draft seems done.

Sunday 19 April 2009

Individual Transferable Quotas

An article (idea) I came across that makes use of distributed negotiation and social welfare concepts:

"Iceland has not quite proved that fish can sing, but it has shown they can continue to flourish, even when hunted by their main predator, man. Central to its policy are the individual transferable quotas given to each fishing boat for each species on the basis of her average catch of that fish over a three-year period. This settles the boat’s share of the total allowable catch of that fish for the entire country. The size of this total is announced each year on the basis of scientific advice from the independent Marine Research Institute.

Subject to certain conditions, quotas can be traded among boats. Bycatch must not be discarded. Instead it must be landed and recorded as part of that boat’s quota. If she has exhausted her quota, she must buy one from another boat, though 20% of a quota may be carried forward a year, and 5% of the next year’s quota can be claimed in advance..."


(Source: The Economist, January 3rd 2009)

Wednesday 11 March 2009

51, Argument-Based Machine Learning

Just read 'Argument-Based Machine Learning' by Ivan Bratko, Jure Zabkar and Martin Mozina. I think it's a chapter in a forthcoming book. I skipped the details in the middle but found the high-level explanations and examples in the beginning and end very understandable. A good reminder of machine (/inductive) learning and a nice presentation of how argumentation can aid in this.

Friday 6 March 2009

Tight Arguments

Not that it is very very important but I thought I would document a few small issues I have with the definition of "Tight arguments" in 'Dialectic proof procedures for assumption-based, admissible argumentation' (2005).

Firstly, "multi-sets". Are they necessary? Why?

Secondly, the "selection function". According to the way it is used, and footnote 6 in particular, it is not really a function: it does not give the same output every time for a given input.

Thirdly, the "definition" seems a bit confused with the "construction".

Sunday 22 February 2009

50, Argumentation and Game Theory

Good paper (by Iyad Rahwan and Kate Larson) to come back to later when considering self-interested agents (i.e. those only interested in furthering individual goals) that argue strategically.

An agent's type determines the subset of all possible arguments in the argumentation framework that it is capable of putting forward. The notion of defeat (i.e. the defeat relation) is assumed common to all agents.

The kind of manipulation (lying) considered is that wherein agents hide some of their arguments. ("By refusing to reveal certain arguments, an agent might be able to break defeat chains in the argument framework, thus changing the final set of acceptable arguments.") An external verifier is assumed so that agents cannot create new arguments that they do not have in their argument set.

Reiterating, the key assumptions are:
  1. There is a common language for describing/understanding arguments.
  2. The defeat relation is common knowledge.
  3. The set of all possible arguments that might be presented is common knowledge.
  4. Agents do not know who has what arguments.
  5. Not all arguments may end up being presented by their respective agents.
Even with the above assumptions, the authors show that agents may still have incentive to manipulate the outcome by hiding arguments.

Tuesday 10 February 2009

Distributed Coordination Procedures

Interesting paragraph found in the 'Related Research' section of 'Collective Iterative Allocation: Enabling Fast and Optimal Group Decision Making' (2008) by Christian Guttman, Michael Georgeff and Iyad Rahwan:

"Distributed coordination procedures are often investigated using the Multi-Agent Systems (MAS) paradigm, because it makes realistic assumptions of the autonomous and distributed nature of the components in system networks [...]. Many MAS approaces do not adequately address the 'Collective Iterative Allocation' problem as they use each agent's models separately to improve coordination as opposed to all agents using their models together. That is, each agent uses its own models to decide on allocating a team to a task even if other, more knowledgeable agents would suggest better allocations..."

Saturday 7 February 2009

New Argument-Based Negotiation Policy

Setting:
- An agent is initially allocated a set of resources, possibly none.
- Resources are not divisible. An agent either has a particular resource or it does not.
- Resources are not shareable. No two agents have the same resource.
- An agent has at most one goal, possibly none.
- Goals are fulfilled by single resources.
- A certain goal may be fulfilled by a choice of different resources.
- A certain resource may fulfil a choice of different goals.

Allowed dialogues:
- Request dialogue between two agents (an initiator and a responder), each agent involved gives away at most one resource.
- Proposal dialogue between three or more agents (an initiator and a set of responders), each agent involved gives away at most one resource.
- A reason (Conclusion, Support) is provided with refusal/rejection only.

Pros of the policy:
- Computes the right solution (i.e. maximum number of agents fulfil their goal) when agents share all resource-goal 'fulfils' plans (from the outset).
- It is an 'any-time algorithm', i.e. resources are reallocated in such a way that the 'social welfare' does not decrease at any point. Also, the resource allocation can be modified as agents enter the system (without decreasing the 'social welfare' at any point).

Cons of the policy:
- Wasteful in number of requests/proposals. Instead, maybe, agents should ask initially, "do you have any resource that can fulfil my goal given that I know these resources (Rs) fulfil my goal?"
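
To keep the setting concrete while implementing, here is a toy instance in the has/goal/fulfils vocabulary, together with the sort of test an initiator might run before issuing a request (the predicate names are mine, not part of the policy itself):

has(a1, r1).      goal(a1, g1).
has(a2, r2).      goal(a2, g2).
fulfils(r2, g1).  % r2 fulfils a choice of goals (g1 and g2), hence the conflict
fulfils(r2, g2).

fulfilled(Ag) :- goal(Ag, G), has(Ag, R), fulfils(R, G).

% Ag has reason to request R from Other
worth_requesting(Ag, R, Other) :-
    goal(Ag, G), \+ fulfilled(Ag),
    fulfils(R, G), has(Other, R), Other \== Ag.

% ?- worth_requesting(a1, R, Other).   gives R = r2, Other = a2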

Friday 6 February 2009

Experimentation

Note to self: Test any (multi-agent resource reallocation) (argument-based) negotiation policies I propose/develop against alternatives proposed/developed by others, argument- or interest-based or otherwise, even if it means mapping/translating. Also, test outcomes (and maybe efficiency, complexity and such measures) of any negotiation policies I propose/develop against outcomes reached by a centralised (all-knowing, maybe) procedure.

49, An Empirical Study of Interest-Based Negotiation

Some notes taken whilst reading 'An Empirical Study of Interest-Based Negotiation' (2007) by Philippe Pasquier, Liz Sonenberg, Iyad Rahwan et al.

Assumptions of the paper (some of which differ from those in my work (in progress)):
  • The resources are not shared and all the resources are owned. Agents also have a finite amount of "money", which is part of the resources and is the only divisible one.
  • Uses numerical utility values ("costs", "benefits", "payments" etc based on this).
  • Negotiation restricted to 2 agents.
  • All agents have shared, common and accurate knowledge.
  • No overlap between agents' goals, plans, needed resources etc, which avoids the problems of positive and negative interaction between goals and conflicts for resources.
  • Both (i.e. all) agents use the same strategy. (Manipulable given that agents are out to maximise individual gains? Maybe but agents are assumed to be truthful.)

Additionally: "Agents do not have any knowledge about the partner's utility function (not even a probability distribution) and have erroneous estimations of the value of the resources not owned." It seems the primary benefit of IBN in this paper is to explore how agents can correct such erroneous information. (Agents trust each other not to lie about resource valuations.) A comparison is made between agents capable of bargaining only and agents capable of bargaining and reframing.

Content of the paper:

  • Introduction and Motivations
  • Agents with hierarchical goals (/plans)
  • The Negotiation Framework (Bargaining and Reframing Protocols/Strategies)
  • Simulation and Example
  • Experimental Results (Frequency and Quality of the deals; Negotiation complexity)
  • Conclusion and Future Work

Thursday 15 January 2009

48, A Multi-Agent Resource Negotiation for the Utilitarian Social Welfare

Very good point about compensatory side payments (and limits of agent budgets) on page 4 (Section 2 - Transaction).

Also, good summary of Tuomas Sandholm's peer-to-peer negotiation work (rational and non-rational sequences of transactions, optima, etc) on page 4 (Section 2.1 - Convergence).

Nice conclusion to return to.

48, A Multi-Agent Resource Negotiation for the Utilitarian Social Welfare

Quite related to my (work in progress) paper 'On the benefits of argumentation for negotiation'. The paper studies various "agent behaviours" in order to identify which one leads most often (by means of local interactions between the agents) to a (global? T(ransaction)-global?) "optimal" resource allocation.

Main contribution of the paper: Providing/designing/exhibiting an (explicit negotiation) process that is able to converge, in practice, either towards a global optimum, or towards a near optimal solution (resource allocation). Also, to compare the social value of the resource allocation that is reached at the end of the negotiation process with the globally optimum social value (obtained by means of a 0-1 linear program).

Not sure how this work differs from Andersson & Sandholm's (1999) [47] except in considering incomplete 'contact networks'.

Assumptions of the paper:
- Bilateral Transactions (i.e. transactions between 2 agents only).
- Positive additive utility function which is comparable between agents.
- Resources are discrete, not shareable, not divisible, not consumable (static) and unique.
- No compensatory side payments.
- Sequential negotiations, i.e. only one agent at a time is able to (initiate) negotiation, though this does not seem significant in affecting the quality of the (social welfare of the) final allocation reached.
- (Implicitly:) Agents are truthful in reporting utilities. (This works in the case of "socially" transacting agents since agents are out to maximise social welfare and not individual welfare).
- All agents in a "contact network" (agent system) must use the same transaction type.

Content of the paper:
- Introduction (MARA problem; Contact network; Social welfare)
- Transaction (Convergence; Acceptability criteria; Transaction type; Communication Protocol)
- Experiments (Experiment protocol; Evaluation criteria; Optimal value determination)
- Social Gift (Behaviour variants; Behaviour efficiency; Proof of convergence; Egalitarian efficiency of the social gift)

Linking it to my work:
- It may be an idea to define argumentative negotiation policies that are based on "rational transactions" as well as "social transactions" (gifts, swaps and cluster-swaps) and to compare outcomes from each.
- What if agents could use gift, swap and cluster-swap transactions intermittently, as well as transactions involving multiple (3+) agents? Would that improve outcomes of negotiation (wrt the global optimum)? The former (mixing transaction types) is not considered in this paper. The latter (multi-agent transactions) is not possible (using the communication protocol of figure 1).
- Could interest-based negotiation (exchanging arguments etc) offer benefits in terms of the path to solution in the original set-up as well as the two additional set-ups described in the previous point?