Tuesday 30 December 2008

47, Time-Quality Tradeoffs in Reallocative Negotiation with Combinatorial Contract Types

Just read 'Time-Quality Tradeoffs in Reallocative Negotiation with Combinatorial Contract Types' (1999) by Martin Andersson and Tuomas Sandholm following on from reading [46] yesterday. Some thoughts:

Nice discussion of distributed reallocative negotiation "versus" (centralized) (combinatorial) auctions at the end of page 1 continuing on page 2.

Multiagent Travelling Salesman Problem (page 2). Interesting.

The contracting system between agents ("contract sequencing") described on pages 4 and 5 is in essence an exhaustive search. Naturally, slow and cumbersome. Multi-agent dialogues and interest-based negotiation could perhaps play a role here. Also, no "algorithm" (contracting system) is provided for OCSM-contracts, or even for contracts of mixed/different types.

I like the presentation of the results, i.e. comparing the different contract types in terms of (i) the outcomes (solution quality in terms of social welfare) reached, and (ii) the number of contracts tried and performed before a (local) optimum is reached.

Monday 29 December 2008

Decentralized multiagent contracts

"Decentralized multiagent contracts can be implemented for example by circulating the contract message among the parties and agreeing that the contract becomes valid if every agent signs." ('Contract Types for Satisficing Task Allocation: I Theoretical Results' (1998) by T. W. Sandholm)

Alternatively to passing the contract around, something else to try: an agent noticing that a multiagent contract is necessary could broadcast a proposal of the multiagent contract to all prospective agents and if all agents agree, then the initiating agent could broadcast a confirmation to the recipients sealing the contract.
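A rough sketch of this broadcast alternative in Python (all names and the message interface are made up for illustration, not taken from the paper): the initiating agent broadcasts the proposed multiagent contract, and the contract is sealed only if every prospective agent agrees, in which case a confirmation is broadcast.

```python
# Sketch of the broadcast scheme: propose to all, seal iff all agree,
# then broadcast a confirmation. All names here are hypothetical.

class Agent:
    def __init__(self, name, acceptable):
        self.name = name
        self.acceptable = acceptable   # contracts this agent would sign
        self.sealed = set()            # contracts confirmed as valid

    def consider(self, contract):
        return contract in self.acceptable

    def receive_confirmation(self, contract):
        self.sealed.add(contract)

def seal_contract(contract, prospective_agents):
    """Broadcast a proposal; seal (and confirm) iff every agent agrees."""
    if all(agent.consider(contract) for agent in prospective_agents):
        for agent in prospective_agents:
            agent.receive_confirmation(contract)
        return True
    return False   # a single refusal aborts the whole multiagent contract
```

A single round trip suffices here because agreement is all-or-nothing; a refusal by any one agent aborts the contract with no confirmation sent.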

46, Contract Types for Satisficing Task Allocation: I Theoretical Results

Classification of contract types below taken from 'Contract Types for Satisficing Task Allocation: I Theoretical Results' (1998) by Tuomas W. Sandholm. A very important and closely related paper to keep referring back to. Will need to realise the OCSM-contract to achieve completeness in my work.

O-contract: one task given by an agent i to an agent j (+ contract price i pays to j for handling the task).

C-contract: a cluster (more than 1) of tasks given by an agent i to an agent j (+ contract price i pays to j for handling the task set).

S-contract: a swap of tasks, where agent i subcontracts a (single) task to agent j and vice versa (+ amount i pays to j and amount j pays to i).

M-contract: a multi-agent contract involving at least three agents, wherein each agent involved gives away a single task to another agent (+ payment).

Each contract type above is necessary (and avoids some of the local optima that the other three do not) but is not sufficient in and of itself for reaching the global optimum via "individually rational" contracts.

OCSM-contract: combines/merges characteristics of the above contract types into one contract type - where the ideas of the above four contract types can be applied simultaneously (atomically).
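The contract types above can be pictured as move operators in a hill-climbing search over allocations, accepting only individually rational (IR) moves. A rough Python sketch for O-contracts only (the cost function and names are made up, not from the paper):

```python
# Sketch: O-contracts as move operators in hill climbing over task
# allocations. A move is individually rational iff the giver's saving
# exceeds the receiver's added cost, so a price in between benefits both.
# The cost function is a hypothetical stand-in for each agent's costs.

def o_contract_hillclimb(allocation, cost):
    """allocation: dict agent -> set of tasks; cost(agent, tasks) -> float."""
    improved = True
    while improved:
        improved = False
        for i in allocation:
            for task in list(allocation[i]):
                for j in allocation:
                    if j == i:
                        continue
                    # i's marginal saving minus j's marginal cost
                    gain = (cost(i, allocation[i]) - cost(i, allocation[i] - {task})) \
                         - (cost(j, allocation[j] | {task}) - cost(j, allocation[j]))
                    if gain > 0:   # an IR contract price exists
                        allocation[i] -= {task}
                        allocation[j] |= {task}
                        improved = True
    return allocation
```

With any single contract type the search stops at a local optimum, which is exactly why combining all four atomically (the OCSM-contract) matters for completeness.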

Sunday 28 December 2008

44, Towards Interest-Based Negotiation

Goal Arguments (which justify agents' adoption of certain goals) in the paper 'Towards Interest-Based Negotiation' (TIBN) by Iyad Rahwan et al take the form ((SuperG,B,SubG):G), i.e. an agent adopts (intends) a goal (G) because it believes G is instrumental to achieve its supergoal (SuperG), believes the context (B) which justifies G to be true and believes the plan (SubG) for achieving G to be achievable.

We identify here a few forms of attacks allowed (in TIBN) on these Goal Arguments for which we have equivalents in our multi-agent setting of 'On the Benefits of Argumentation for Negotiation' (OBAN).

- Attack in TIBN: For a Goal Argument ((SuperG,B,SubG):G), show ¬b where b is a belief in B, i.e. disqualifying a context condition.
- Similar attack in OBAN: Agent Y argues "I do not have resource R", where "you have resource R" is a belief agent X has (& utters) as part of either requesting R from Y or requesting R2 from Y. In the latter case, X's prior argument would be: "Y does not need R2 because Y has R (which alone is sufficient for fulfilling Y's goal)".

- Attack in TIBN: For a Goal Argument ((SuperG,B,SubG):G), show ¬p where p is a goal in SubG, i.e. a subgoal is unachievable.
- Similar attack in OBAN: Agent Y argues "I need to retain resource R (and hence your (sub)goal of obtaining R is unachievable)", where R is a resource agent X requests from Y.

- Attack: For a Goal Argument ((SuperG,B,SubG):R), show set of goals P such that achieve(P,G) where G is a goal in SuperG and R is not a goal in P, i.e. there is an alternative plan P which achieves the supergoal G and does not include R.
- Similar attacks in OBAN (in the case where R is a resource ("goal" in the language of TIBN) agent X requests from agent Y):
--- X argues "you do not need resource R (since you have a resource R2 that alone is sufficient for fulfilling your supergoal G)".
--- X argues "you do not need resource R (since I have a resource R2 that alone is sufficient for fulfilling your supergoal G and I will exchange with you R2 for R)".
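The three attack forms above can be phrased as simple checks against a goal argument ((SuperG,B,SubG):G). A small Python sketch (the data passed in, e.g. the disbelieved beliefs and alternative plans, stands in for an agent's belief base and is entirely hypothetical):

```python
# Sketch of the three TIBN-style attacks on a goal argument
# ((SuperG, B, SubG) : G). Inputs are hypothetical belief-base contents.

from dataclasses import dataclass

@dataclass
class GoalArgument:
    supergoals: set
    beliefs: set
    subgoals: set
    goal: str

def attack_context(arg, disbelieved):
    """Attack 1: context beliefs b in B shown false (returns those found)."""
    return arg.beliefs & disbelieved

def attack_subgoal(arg, unachievable):
    """Attack 2: subgoals p in SubG shown unachievable (returns those found)."""
    return arg.subgoals & unachievable

def attack_alternative(arg, plans):
    """Attack 3: alternative plans P with achieve(P, G') for a supergoal G',
    where arg.goal is not part of P (returns the qualifying plans)."""
    return [P for (g, P) in plans if g in arg.supergoals and arg.goal not in P]
```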

Friday 26 December 2008

45, Mass argumentation and the semantic web

This paper ('Mass argumentation and the semantic web' by Iyad Rahwan) contains some useful links (footnotes 1-5 & 9) and a very useful background (Sections 1 and 2). The rest of the paper is not really (directly) related to my research though.

Content of the paper: "Argumentation theory - a crash course", "Arguing on today's web", "Arguing on the semantic web", "An infrastructure for unified semantic argument annotation".

Monday 22 December 2008

44, Towards Interest-Based Negotiation

Some thoughts following on from reading 'Towards Interest-Based Negotiation' (2003) by Iyad Rahwan et al with my aamas-submitted (not accepted) paper in mind:

The paper contains some nice ideas about goal selection which would (/could!) be useful in a (larger) context of multi-agent negotiation (/resource allocation) and in building a generative model (as I intend), but the work here leaves much unspecified and is not generative in and of itself. What is presented in Section 5 ("Dialogues about Goals") is a protocol. No policy or strategy is defined. This is left for future work. I will read the authors' newer paper ('An Empirical Study of Interest-Based Negotiation') to see if this is done and also any other related (later papers) by the authors.

In addition, the framework deals with agent systems consisting of two agents only.

Content of the paper: "Arguing about goals vs Arguing about beliefs", "Agents and goal support" (goals and beliefs/subgoals/supergoals/roles/adoption), "How to attack a goal" (attacking beliefs/subgoals/supergoals), "Dialogues about goals".

"Goal arguments" are presented to be of the form (H:G), where H is the triple support (SuperGoal,Beliefs,SubGoals) for G.

An interesting question, identified as outside the scope of this paper, is: How does an agent, given a top-level goal, generate (from the various options) the set of (sub-) goals to achieve? Suggested approaches: consider the costs of adopting different plans as well as the utilities of the goals achieved, or, identify the goal(s) with the strongest support.
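The first suggested approach (weighing plan costs against goal utilities) can be sketched in a few lines of Python; the numbers and the flat cost/utility tables are made up purely for illustration:

```python
# Sketch of subgoal selection by net benefit: among candidate plans for a
# top-level goal, pick the one maximising achieved utility minus plan cost.
# The utility/cost tables are hypothetical.

def select_plan(candidate_plans, utility, cost):
    """candidate_plans: list of sets of subgoals; utility/cost: subgoal -> float."""
    def net_benefit(plan):
        return sum(utility[g] for g in plan) - sum(cost[g] for g in plan)
    return max(candidate_plans, key=net_benefit)
```

The second suggested approach (strongest support) would instead rank plans by the strength of the arguments backing each subgoal.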

27, On the Benefits of Exploiting Hierarchical Goals in Bilateral Automated Negotiation

Some thoughts following on from re-reading 'On the Benefits of Exploiting Hierarchical Goals in Bilateral Automated Negotiation' (2007) by Iyad Rahwan et al with my aamas-submitted (not accepted) paper in mind:
  • What is presented is a protocol and not a generative model as such.

  • The negotiation framework consists of (/is limited to) two agents.

  • Agents' preferences over (sets of) resources are specified as a (pre-given) numerical utility function. "Deals" between agents (to reallocate resources) make use of "side payments" based on this utility function.

  • The relationship "sub" (linking a goal to the "sub" -goals and/or -resources needed to achieve it) seems to be shared between all agents (though agents have no prior knowledge of each other's main goals or preferences).

  • Much in this paper rests on the existence/allowance of "partial plans" (wherein leaf nodes may be goals as well as resources) and the setting of positive interaction between agents' "shared"/"common" goals such that an agent may benefit from a common goal (or sub-goal) achieved by the other agent.

EUMAS 08 Conference

Attended and presented at the EUMAS conference last week in Bath. Received some useful questions/feedback to think about, as follows:
  • The title ('On the benefits of argumentation for negotiation - preliminary version') is a bit misleading (given the narrow scope of this work). Also, should think about potential/real drawbacks of using argumentation for negotiation as well as its benefits, i.e. look at things more objectively.

  • Look at game-theoretic models. Contrast my work with theirs. Agents providing reasons/justifications with requests, as in this paper, would not be enough in and of itself to argue for argumentation-based negotiation (ABN) over game-theoretic (GT) approaches. For example, providing a reason with a request may not always be advantageous; a reason could rule out an "offer" (in the mind of the recipient agent) that would otherwise have been acceptable. It may also not always be strategically advantageous for an agent to provide reasons with its requests, since the recipient agent could use this against the requesting agent.

  • Agents providing reasons with dialogue moves doesn't increase the number of solutions possible unless agents provide their overlying goals with their reasons, like the "hammer and nail" example in an earlier paper. Otherwise agents are only justifying their dialogue moves.

  • How come reasons can be provided with a 'refuse' response but not with an 'accept'?

  • The work of Nicolas Hormazabal ('Trust aware negotiation') could be useful.

  • The presentation was perhaps overly simplistic. It looks a bit like I have created/used a problem to justify argumentation, rather than using an argumentative approach to solve a real problem. Also, the sequencing/concurrency of the dialogues was not clear from the presentation; it came across as though only one dialogue move/instance is made at a time, in sequence, regardless of the number of agents in the agent system.

  • A story from Cuba (spurred by my bilateral agent negotiation approach): each person prefers the house of his neighbour (only) over his own, creating a big circle of potential swaps. Eventually, (if/) once the circle is established/known, each person moves into the house of his neighbour resulting in a happier society. Point being: why not have everyone report their desires/preferences publicly and have the final result/allocation decided upon centrally like in an auction? Wouldn't that be easier?
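The Cuba story is essentially cycle detection in a "prefers" directed graph: find the circle, then move everyone one step along it. A quick Python sketch (the names are made up):

```python
# Sketch of the house-swap circle: each person points to the person whose
# house they prefer; once a cycle through someone is found, everyone in it
# gets the house of the person they point to. Names are hypothetical.

def find_swap_cycle(prefers, start):
    """prefers: dict person -> person whose house they want. Returns the
    cycle through `start`, or None if the walk never returns to `start`."""
    cycle, current = [start], prefers[start]
    while current != start:
        if current in cycle:
            return None   # walk entered a cycle not containing start
        cycle.append(current)
        current = prefers[current]
    return cycle

def reallocate(houses, cycle):
    """Each person in the cycle gets the house of the person they point to."""
    return {p: houses[q] for p, q in zip(cycle, cycle[1:] + cycle[:1])}
```

Which is, of course, exactly the centralised alternative the questioner was pointing at: gather everyone's preferences, compute the cycles centrally, and skip the bilateral negotiation.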

Tuesday 16 December 2008

Microfinance / BRAC

I made a mental note yesterday to read up on ‘microfinance’ later today when, lo and behold, I conveniently stumbled upon this advert on my way to university earlier today (the ...’s identify parts that I have skipped):

“... Crowded into thousands of impoverished villages. Powerless and therefore poor. Poor and therefore powerless.

Where to begin?...

Start with one village. Identify the stakeholders, the believers, the leaders who could make change happen. That would be women.

First, disaster relief. Then... Then... Then microfinance, the indispensable multiplier, the key to scaling up.

Money to pay for a cow. Fresh milk and something wondrous called ‘income’. The cow became a dairy, then a milk distribution business...

Soon there was $5 billion in micro-loans, 7 million borrowers, 265,000 village organizations, 52,000 schools, 8.5 million jobs and new ventures in eight other Asian and African countries...”

(Source: ‘BRAC’ advert found in ‘The Economist’, www.brac.net)

If anyone has information or recommended reading/listening/viewing regarding microfinance/microloans, please share. Thanks.

Monday 15 December 2008

Tuesday 9 December 2008

Multiagent Resource Allocation

Nice definition (taken from the '3rd MARA Get-Together: Workshop on Multiagent Resource Allocation'):

"the allocation of resources within a system of autonomous agents that not only have preferences over alternative allocations of resources but also actively participate in computing an allocation."

Friday 5 December 2008

Argument evaluation in implementations of negotiation policies

Finished modifying the implementations of the eumas- and aamas- negotiation policies to use (call) the new general argument evaluation procedure. Seems to be working fine (in Linux).

Monday 1 December 2008

Running the implementations on Windows

Still having problems compiling and running the eumas- and aamas- implementations in Windows. The problem seems to be with the Windows Sicstus prologbeans libraries but I am not entirely sure. Will leave this for now, complete the implementations for Linux and maybe return to this later.

Thursday 27 November 2008

Making the implementations portable

I realised yesterday that my implementations (of the eumas- and aamas- negotiation policies) are written such that they depend on the particular setup/configurations of my office computer and may not work elsewhere. The main problem is the way Sicstus Prolog is called from the Java (Jade) code. I think I have solved the problem such that my code should run on any Linux machine. However, as for Windows... having a few problems!

General argument evaluation procedure

I completed last week a general argument evaluation procedure (written in Prolog) that, given a claim in the context of an Assumption-Based Argumentation (ABA) framework (consisting of a language, inference rules, assumptions and contraries), checks whether the claim is acceptable according to the admissibility semantics and, if so, returns the defence set (which includes facts as well as assumptions).

I will now modify the implementations of the eumas- and aamas- negotiation policies to use (call) this procedure rather than CaSAPI.

The reason for implementing another argument evaluation procedure when CaSAPI already exists is that CaSAPI contains many features which I do not need and does not contain some features which I do need (e.g. returning a defence set that includes facts, as required in the kind of inter-agent communication setting I consider).
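As a note to self, the gist of such a procedure can be caricatured in a few lines of Python. This is a toy for tiny, flat frameworks only: it brute-forces assumption subsets, and it returns only the supporting assumptions rather than the full defence set with facts (facts are modelled as rules with empty bodies).

```python
# Toy ABA-style admissibility check. Rules are (head, body) pairs; facts
# are rules with empty bodies. For small frameworks only (brute force).

from itertools import combinations

def subsets(xs):
    xs = list(xs)
    return (set(c) for r in range(len(xs) + 1) for c in combinations(xs, r))

def derivable(claim, allowed, rules, seen=frozenset()):
    """claim follows from the rules plus the allowed assumptions."""
    if claim in allowed:
        return True
    if claim in seen:               # guard against cyclic rules
        return False
    return any(head == claim and
               all(derivable(b, allowed, rules, seen | {claim}) for b in body)
               for head, body in rules)

def attacks(attacker, asm, rules, contrary):
    return derivable(contrary[asm], attacker, rules)

def admissible(A, rules, assumptions, contrary):
    if any(attacks(A, a, rules, contrary) for a in A):
        return False                # not conflict-free
    # every set B attacking a member of A must be counter-attacked by A
    return all(any(attacks(A, b, rules, contrary) for b in B)
               for B in subsets(assumptions)
               for a in A if attacks(B, a, rules, contrary))

def acceptable(claim, rules, assumptions, contrary):
    """An admissible assumption set supporting claim, or None."""
    for A in subsets(assumptions):
        if derivable(claim, A, rules) and admissible(A, rules, assumptions, contrary):
            return A
    return None
```

The real Prolog procedure computes the defence set via dispute derivations rather than subset enumeration, of course; this is just the semantics spelled out naively.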

UCL-Imperial Workshop

Took part in a workshop organised by Francesca Toni (Imperial College) and Anthony Hunter (UCL) on Tuesday. The aim was for students (including me) to present and discuss work in an informal setting and to receive feedback. I have uploaded the slides I used to my home page.

Monday 17 November 2008

AAMAS paper implementation

Completed implementation of the policies described in the AAMAS 09 paper, modifying the paper along the way and not replicating the policies in the paper entirely. Also identified a few problem scenarios (see 'readme') that demonstrate why the policies are not 'complete'.

Plan now is to define a general argument evaluation procedure before proceeding with anything else.

Blocking Initiator Behaviour

Modified the Initiator Behaviour (for the EUMAS and AAMAS paper implementations) so that it does not 'block'. Example 5 of the EUMAS paper demonstrates the problem: The initiating condition test does not succeed first time round and blocking at this point could mean that the Initiator Behaviour is not scheduled again.

Ordering of requesting, responding and receiving a response

According to my current implementation, agents check that there are no incoming messages (requests or responses) before sending/initiating a request. That's fine. The problem scenario occurs when an agent receives a response and a request. Does the order in which it processes them matter? Yes, I think so. The agent should process any incoming responses (updating its belief base as necessary) first before responding to a request.
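As a rough sketch of this ordering (the message format and agent interface here are made up, not my actual implementation): drain all incoming responses first, updating beliefs, then answer requests with the up-to-date beliefs, and only initiate a new request if the inbox was empty.

```python
# Sketch of one processing step with the ordering argued for above.
# The agent interface (update_beliefs, respond, initiate_request) and the
# message dicts are hypothetical.

def step(agent, inbox):
    responses = [m for m in inbox if m["type"] == "response"]
    requests = [m for m in inbox if m["type"] == "request"]
    for m in responses:        # 1. responses first: they may change beliefs
        agent.update_beliefs(m)
    for m in requests:         # 2. then respond, using up-to-date beliefs
        agent.respond(m)
    if not inbox:              # 3. initiate only when nothing is pending
        agent.initiate_request()
```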

Submitted EUMAS paper

Completed revisions of the EUMAS paper ('On the benefits of argumentation - preliminary version') and submitted the revised version. Also, prepared an extended technical report and sent it off for uploading to the Department of Computing webpage.

Monday 10 November 2008

Speaking Technically

Read this book ('Speaking Technically: Handbook for Scientists, Engineers and Physicians on How to Improve Technical Presentations' by Sinclair Goodlad) whilst preparing my EUMAS presentation. Quite useful. A good book to refer back to in future.

Thursday 6 November 2008

EUMAS paper revision

Had my EUMAS paper accepted, al7amdulillah. Spent this week making modifications based on reviewer comments and also working on the slides for the presentation in December.

AAMAS paper implementation

Began implementing the negotiation policies of the AAMAS paper last week. Completed the simple policy, albeit by means of a makeshift argument evaluation procedure. Still need to do the argument-based policy, including the updateBeliefs procedure.

Friday 17 October 2008


Quite good past few weeks. Completed an implementation and submitted a couple of papers (to EUMAS and AAMAS). Topic of the papers: On the benefits of argumentation for negotiation.

Need to start thinking about the finalisation report now!

Sunday 28 September 2008


Example! Example! Example! Give examples! Otherwise your work won't make sense to anyone except yourself.

Tuesday 19 August 2008

Resource Re-allocation using Jade/CaSAPI

Finally got a simple resource re-allocation procedure working using the Jade/CaSAPI combination. Now time to test it.

Thursday 31 July 2008

JADE, PrologBeans, CaSAPI

Almost done linking JADE (Java Agent DEvelopment Framework), PrologBeans (package for integrating Java and Prolog applications) and CaSAPI (Credulous and Sceptical Argumentation Prolog Implementation). Should be done by the end of the week, ready to put the three to collaborative use as of next week, God-willing :)

Tuesday 22 July 2008

PrologBeans - note to self 2

If I bind the atom "ag1" to the variable "Agent" using some Bindings instance ('b') and execute the query "has(Agent,Resource)" whilst providing 'b', then, as expected, if the query succeeds, a value is bound to the variable "Resource" in the returned QueryAnswer instance ('answer').

In addition, in 'answer', a value "ag1" is also bound to "Agent".

So, it seems 'answer' contains value bindings for output variables as well as input variables.

PrologBeans - note to self 1

In executing a query using a PrologSession instance, a QueryAnswer instance is returned.

If there was an error in execution, the QueryAnswer instance will return true for isError() but false for queryFailed().

Sunday 20 July 2008


I spent this week interfacing Prolog with JADE. Seems to be working roughly (i.e. starting up an agent, making a connection to a Prolog server, running a query and then shutting down the server). Need to test it thoroughly and make corrections early next week before proceeding.

Thursday 29 May 2008

JADE - Behaviour Scheduling and Execution

"An agent can execute several behaviours concurrently. However, it is important to note that the scheduling of behaviours in an agent is not pre-emptive (as for Java threads), but cooperative. This means that when a behaviour is scheduled for exection its action() method is called and runs until it returns. Therefore it is the programmer who defines when an agent switches from the execution of one behaviour to the execution to another."

(Source: 'Developing Multi-Agent Systems with JADE')

Monday 19 May 2008

AAMAS 2008

Just got back from a week in Portugal attending the 'Seventh International Conference on Autonomous Agents and Multiagent Systems' (AAMAS 2008). It was good to meet in person individuals whose work I have been following this past year-and-a-half. On the back of the conference and given that my focus is Argumentative Negotiation in Multiagent Systems, I find myself quite interested in the work of Iyad Rahwan (strategy etc), Peter McBurney (dialogue games etc) and Elizabeth Black (enthymemes etc).

Thursday 8 May 2008

43.2, MAS: Rational Decision Making and Negotiation

Snippets taken from slides prepared and used by Ulle Endriss to teach a "Multiagent Systems: Rational Decision Making and Negotiation" course at Imperial College London in 2005:

Game Theory: Given the rules of the "game" (the negotiation mechanism, the protocol), what strategy should a rational agent adopt?

Dominant Strategies: A strategy is called dominant iff, independently of what any of the other agents do, following that strategy will result in a larger payoff than any other strategy.

Nash Equilibria: A Nash equilibrium is a set of strategies, one for each agent, such that no agent could improve its payoff by unilaterally deviating from their assigned strategy.
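The two definitions can be checked directly against a two-player payoff table. A quick Python sketch, using the Prisoner's Dilemma purely as an illustration (note the sketch tests weak dominance with >=, whereas the slide's "larger payoff" wording is strict dominance):

```python
# Checking dominance and Nash equilibrium on a two-player payoff table.
# payoffs[(s1, s2)] = (payoff to agent 1, payoff to agent 2); the PD
# numbers below are illustrative only.

PD = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
      ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def is_dominant(strategy, mine, theirs, payoffs, me=0):
    """strategy does at least as well as every alternative, whatever the
    opponent does (weak dominance)."""
    def pay(s, t):
        key = (s, t) if me == 0 else (t, s)
        return payoffs[key][me]
    return all(pay(strategy, t) >= pay(alt, t)
               for t in theirs for alt in mine if alt != strategy)

def is_nash(profile, strategies, payoffs):
    """No agent can improve its payoff by unilaterally deviating."""
    s1, s2 = profile
    return all(payoffs[(s1, s2)][0] >= payoffs[(d, s2)][0] for d in strategies[0]) \
       and all(payoffs[(s1, s2)][1] >= payoffs[(s1, d)][1] for d in strategies[1])
```

In the PD, defecting is dominant for both agents, so (D,D) is the unique Nash equilibrium even though (C,C) would make both better off.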

Monday 5 May 2008

43.1, MAS: Rational Decision Making and Negotiation

Snippets taken from slides prepared and used by Ulle Endriss to teach a "Multiagent Systems: Rational Decision Making and Negotiation" course at Imperial College London in 2005:

Welfare Economics: mathematical models of how the distribution of resources amongst agents affects social welfare.

Social Welfare: Utilitarian, Egalitarian, Nash Product, Pareto Optimality.
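The four notions are just different aggregations of a profile of individual utilities (one number per agent). A quick Python sketch; the utility values used below are made up:

```python
# The four social-welfare notions as functions of a utility profile.

from math import prod

def utilitarian(utils):      # sum of individual utilities
    return sum(utils)

def egalitarian(utils):      # utility of the worst-off agent
    return min(utils)

def nash_product(utils):     # product of individual utilities
    return prod(utils)

def pareto_dominates(u, v):  # u at least as good for all, better for someone
    return all(a >= b for a, b in zip(u, v)) and any(a > b for a, b in zip(u, v))
```

An allocation is Pareto optimal when no other allocation Pareto-dominates its utility profile.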

Thursday 1 May 2008

More questions to think about

Following on from the previous post, a couple of questions to think about:

Why negotiation rather than an auction-based approach?

Why argumentative negotiation rather than a bargaining approach?

Wednesday 30 April 2008

THE question: What is "the problem"?

(Note: Terms enclosed in quotation marks below most likely require background knowledge to be properly understood.)

My problem to solve is as follows: Given a number of "agents", each with a number of "resources" and each with "desires" for resources that it may or may not have, how do the agents exchange resources so that each has/obtains the resources it desires?

My solution: Allow the agents to "dialogue" between themselves by means of "argumentative negotiation". This will be achieved by modularising the problem into three inter-related sub-problems:

(1) Defining the "dialogue protocol" for argumentative negotiation, i.e. what are the "messages" that agents can exchange? how are the messages connected to form an argumentative negotiation dialogue? what messages initiate the dialogue? what messages terminate the dialogue? when is a terminating dialogue "successful" and when is it "unsuccessful"?

(2) Defining the "agent policies", i.e. what does the agent do with incoming messages? how does the agent know what messages are allowed to be sent at a certain stage according to the protocol? out of all these allowed messages, which one does the agent select to send at a certain "turn"? why/how?

(3) Defining the "knowledge-base" ("inference rules", allowed "assumptions", "contraries" of assumptions etc) which guides the decision-making of the agent at the policy/"strategy" level.

Note to self: Agent policies are to be defined so that they are largely independent of the definition of the various knowledge-base components they call upon when run. This will allow modifying the underlying knowledge of the agent (i.e. how desires are specified, how preferences over resources are specified, how incentive for exchanging resources is specified, and so on) hence allowing the behaviour of the agents and results of the negotiations to be modified without having to modify the dialogue protocol or agent policies.
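This modularity can be sketched in a few lines (all names made up): the policy only ever goes through an abstract knowledge-base interface, so the underlying knowledge (how desires, holdings etc. are specified) can be swapped without touching the policy or the protocol.

```python
# Sketch of the policy/knowledge-base separation noted above.
# Interface and names are hypothetical.

class KnowledgeBase:
    def desires(self, agent): ...
    def has(self, agent, resource): ...

class SimpleKB(KnowledgeBase):
    def __init__(self, desires, holdings):
        self._desires, self._holdings = desires, holdings
    def desires(self, agent):
        return self._desires.get(agent, set())
    def has(self, agent, resource):
        return resource in self._holdings.get(agent, set())

def request_policy(agent, others, kb):
    """Request each desired-but-missing resource from an agent that holds it.
    The policy never looks inside the KB's representation."""
    return [("request", other, r)
            for r in kb.desires(agent) if not kb.has(agent, r)
            for other in others if kb.has(other, r)]
```

Swapping SimpleKB for, say, a KB that derives desires from goals would change the negotiation behaviour with request_policy left untouched.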

Monday 21 April 2008

Jason - note to self 6

Given a hypothetical execution as follows:
(1) an agent ag1 delegates a goal !g1 to an agent ag2;
(2) ag2 begins executing !g1 (as delegated by ag1);
(3) ag2 reaches a stage in the plan body of !g1 where it has to execute !g2;
(4) ag2 is in the process of executing !g2 (called from !g1);
(5) ag2 receives an 'unachieve !g1' message from ag1.

Now, in processing the 'unachieve' message, !g1 would be removed from the current set of intentions. !g2 (and any goals subsequently called by !g2 that are currently in the stack of intentions) would also be removed.

This is because all those plans chosen to achieve sub-goals would be within the stack of plans forming the intention, on top of the plan for !g1, which would be dropped (and everything on top of it too, necessarily).

HOWEVER... the case for !! is different (recall that this allows the agent to achieve a goal in a SEPARATE intention). In this case, if we choose to achieve a goal in a separate intention, we lose track of why we were trying to achieve the goal. Needless to say, although this (!!) operator is provided because it can be useful, this is one of the reasons why it should be used with care.

Now modify step (5) to read "ag2 receives an 'unachieve !g2' message from ag1". In this case, !g1 will also be dropped, since 'unachieve' is handled using the '.drop_desire' internal action: '.drop_desire(g2)' "kills" the whole intention in which !g2 appears, and no failure event is produced. Using '.fail_goal' instead of '.drop_desire' allows a different behaviour: the plan for !g2 would be terminated and a failure event for !g1 (note it's g1 here) would be created.
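As a note to self, the two behaviours can be pictured with intentions as stacks of active goals (bottom first). A toy Python sketch, not Jason itself:

```python
# Toy model of the intention-stack semantics described above: an intention
# is a stack of goals, bottom first (caller below callee).

def drop_desire(intentions, goal):
    """Jason-style .drop_desire: kill every intention in which `goal`
    appears; no failure event is produced."""
    return [stack for stack in intentions if goal not in stack]

def fail_goal(intentions, goal):
    """Jason-style .fail_goal: terminate `goal`'s plan and everything above
    it, and produce a failure event for the caller goal just below (if any)."""
    kept, events = [], []
    for stack in intentions:
        if goal in stack:
            below = stack[: stack.index(goal)]
            if below:
                kept.append(below)
                events.append(("-!", below[-1]))  # failure of the caller
        else:
            kept.append(stack)
    return kept, events
```

A goal pushed with !! lives in its own (separate) intention, which is exactly why it survives when the original intention is dropped, and why the link back to "why we were trying to achieve it" is lost.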

Saturday 19 April 2008

Jason - note to self 5

When an agent receives an 'unachieve' message, it checks all its related desires (goal addition events in the set of events) and intentions (goal additions in the set of intentions or in suspended intentions) and drops those.

In a situation where the agent receives the 'unachieve' message BEFORE it even has the desire to achieve the goal in question, nothing will be done.

An 'unachieve' message, from ag1 to ag2 say, should "kill" only those intentions in ag2 created by 'achieve' messages from ag1 (i.e. agents should only be able to drop desires in other agents that are created from their 'achieve' messages). However, this is not presently the case.

Jason - note to self 4

Agent programming is intrinsically concurrent: Agents and the environment, unless otherwise specified, run asynchronously, so various executions of some given code are possible. The different executions represent different interleavings of the executions of the threads of each of the agents, which is determined by the operating system scheduling. With 2 processors (or in a distributed network), the possibilities increase further.

Summing up, the asynchronous execution and the way events are produced by Belief Update Function lead to very different executions. This however corresponds to "real life". In case a more detailed control is required, the execution should be synchronised. This can be done by the "synchronous" execution mode.

It is worth reading up on concurrency and threads. A good Distributed Systems book should suffice.

Wednesday 16 April 2008

Jason - note to self 3

It does not make much sense to have a plan to handle failures of a goal that has no plans. In other words, to have
-!g .....
+!g .....
does not make sense.

The "-!g" should be read like "in case the plan to achieve !g fails, do this".

Jason - note to self 2

Initial beliefs generate belief addition events when the agent code is run.

Jason - note to self 1

When an agent itself creates a goal, the annotation "source(self)" is not added, so the goal ends up with no source annotation at all. The solution is to add the source explicitly when the agent itself creates a goal.

Update and Abstract Plan

I haven't posted for a month so I thought I would do a quick update. I spent the previous two weeks setting up and playing with Jade and CaSAPI, and will be spending this week reading up on and playing with Jason.

As for an abstract plan of what I want to achieve, here goes:

The plan is to present a framework that allows for agents to negotiate, primarily in resource re-allocation settings, making use of ideas from argumentation. This will involve (i) defining the agent mind, i.e. the internal reasoning of the agents; (ii) defining dialogue protocols that allow for argumentative negotiation between agents; (iii) defining strategies/policies that allow agents to generate moves and participate in accordance to the dialogue protocols; (iv) detailing the properties and results of the framework, and testing the hypothesis "argumentation allows for better deals to be reached, more efficiently, than negotiation that does not make use of argumentation".

Tuesday 11 March 2008

42, Using Enthymemes in an Inquiry Dialogue System

Contents of 'Using Enthymemes in an Inquiry Dialogue System' (2008), by Elizabeth Black and Anthony Hunter

... Here we investigate the use of enthymemes in inquiry dialogues. For this, we propose a generative inquiry dialogue system and show how, in this dialogue system, enthymemes can be managed by the agents involved, and how common knowledge can evolve through dialogue.


... Here, we adapt and integrate [previous] proposals in order to define a new framework for generating inquiry dialogues that use enthymemes. The agents involved can send and receive enthymemes, they can query the other agent if they do not understand an enthymeme they have received, and they can update their perception of what can be used as common knowledge based on the information exchanged during the dialogue.

Logical Arguments


Representing Dialogues

Generating Dialogues

Properties of Dialogue System


41, Real Arguments are Approximate Arguments

Contents of 'Real Arguments are Approximate Arguments' (2007) by Anthony Hunter

... real arguments (i.e. arguments presented by humans) usually do not have enough explicitly presented premises for the entailment of the claim. This is because there is some common knowledge that can be assumed by a proponent of an argument and the recipient of it. This allows the proponent of an argument to encode an argument into a real argument by ignoring the common knowledge, and it allows a recipient of a real argument to decode it into an argument by drawing on the common knowledge. If both the proponent and recipient use the same common knowledge, then this process is straightforward. Unfortunately, this is not always the case, and raises the need for an approximation of the notion of an argument for the recipient to cope with the disparities between the different views on what constitutes common knowledge.


... Real arguments (i.e. those presented by people in general) are normally enthymemes... An enthymeme only explicitly represents some of the premises for entailing its claim...

Logical Argumentation

Approximate Arguments

Framework for Real Arguments

Generalizing Argument Trees

Sequences of Real Arguments

Decoding Enthymemes

Quality of Enthymemes


Monday 25 February 2008


A more informative, objective and academic approach to online debate and argumentation, imposing greater structure and rules on contributors and contributions. Check it out here.


Great idea and setup for fun competitive online arguing and debating, based on taking sides, voting and points accumulation. Check it out here.

Monday 11 February 2008

40, The Problem of Retraction in Critical Discussion

Contents of 'The Problem of Retraction in Critical Discussion' (2001), by Erik C. W. Krabbe

In many contexts a retraction of commitment is frowned upon... But on the other hand, the very goal of critical discussion - resolution of a dispute - involves a retraction, either of doubt, or of some expressed point of view...

1, The Problem

2, Ingredients for a Solution

(i) Among the rules of dialogue there must be a number of retraction rules that determine, in each dialogical situation, which retractions are permissible...

(ii) If a retraction is permissible the rule should stipulate what, exactly, are the consequences of the retraction...

(iii) ... there must be different stipulations for different types of dialogue.

(iv) ... Retraction rules should take into account the type of persuasion dialogue in which they are to function...

(v) Even within one type of dialogue, there is a need for distinct retraction rules for each type of commitment that occurs within dialogues of that type...

(vi) Another distinction between types of commitment is that between light-side and dark-side commitments...

(vii) ... have a number of different models of dialogue for different types and situations...

(viii) ... it is advisable, in model construction, to make retraction just a bit costly. As was noted above, one might stipulate that retractions lead to further retractions...

3, A Survey of Commitment Types and Constraints on Retraction
- Assertions
- Concessions (Presumptions, Fixed Concessions, Free Concessions)

4, On how to run the hare and hunt with the hounds

Friday 8 February 2008

39, The Eightfold Way of Deliberation Dialogue

Contents of 'The Eightfold Way of Deliberation Dialogue' (2007), by Peter McBurney, David Hitchcock and Simon Parsons

"Deliberation dialogues occur when two or more participants seek to jointly agree on an action or a course of action in some situation..."

1, Introduction

2, Deliberation Dialogues

3, A Formal Model of Deliberations

The following types of sentences are defined: Actions, Goals, Constraints, Perspectives, Facts, Evaluations

The presented formal dialogue model consists of eight stages: Open, Inform, Propose, Consider, Revise, Recommend, Confirm, Close

4, Locutions for a Deliberation Dialogue Protocol

The permissible locutions in the dialogue game are as follows: open_dialogue, enter_dialogue, propose, assert, prefer, ask_justify, move, reject, retract, withdraw_dialogue

5, Example

6, Assessment of the DDF Protocol: Human Dialogues, Deliberation Process, Deliberation Outcomes

7, Discussion: Contribution, Related Work, Future Research

8, Appendix: Axiomatic Semantics

Thursday 3 January 2008

The accidental innovator

Don't you just love that light bulb moment? You know what I mean: There you are, sitting at your desk for days on end trying to crack a problem or generate some great idea, without success, and then, when you are completely detached and least expect it, from a source unimaginable... Eureka! I am sure we have all experienced it at some point or another. Evan Williams, the founder of Blogger and Twitter, certainly has. I read a nice article earlier today about how he stumbled upon his successes and the following three insights: "First, that genuinely new ideas are, well, accidentally stumbled upon rather than sought out; second, that new ideas are by definition hard to explain to others, because words can express only what is already known; and third, that good ideas seem obvious in retrospect."

Check out the article, entitled 'The accidental innovator', which featured in the December 22nd 2007 issue of 'The Economist'. It is a nice read.