Tuesday, 30 January 2007

Literature Discussion

Thoughts following on from 29 January’s supervisor meeting.

Discussion regarding ‘Games that agents play’

In what sense is the formalism “logic-based”? What does it mean to be “logic-based”?

First, there is informal dialogue: a natural discussion between two people, for example, where no real rules govern the interaction and the exchange proceeds in an ad hoc manner.

Moving on from this, there is formal dialogue, and this is normally what is meant by “logic-based”; formal in the sense that there are rules that govern the interaction.

The next step up (and final step) is algorithm-based dialogue (i.e. an implementable formalism). This is in essence the reason for computer scientists formalising (“logicising”) dialogue, since our objectives are not primarily philosophical.
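
To make this concrete for myself: a formal, implementable dialogue in this sense boils down to a fixed set of locutions plus rules about which moves may legally follow which. The sketch below is purely illustrative (the locutions and legality rules are my own invention, not anything from the paper), but it shows why "rules that govern the interaction" is exactly what makes a dialogue mechanisable.

    # Minimal sketch of an "implementable" dialogue: a fixed set of locutions
    # plus a legality rule saying which moves may follow which. All names and
    # rules here are invented for illustration only.

    LOCUTIONS = {"propose", "accept", "reject", "challenge", "justify"}

    LEGAL_REPLIES = {
        None: {"propose"},                        # a dialogue opens with a proposal
        "propose": {"accept", "reject", "challenge"},
        "challenge": {"justify", "reject"},
        "justify": {"accept", "reject", "challenge"},
        "reject": {"propose"},                    # a rejection invites a counter-proposal
        "accept": set(),                          # acceptance closes the dialogue
    }

    def legal(previous, move):
        """True if 'move' is a legal reply to the 'previous' move."""
        return move in LOCUTIONS and move in LEGAL_REPLIES.get(previous, set())

    # propose -> challenge -> justify -> accept is a legal run; propose -> justify is not.
    assert legal(None, "propose") and legal("propose", "challenge") and legal("justify", "accept")
    assert not legal("propose", "justify")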

With so much undefined, should they be allowed to make a claim that “the formalism is computational”?

Not really, but unfortunately this is the case with a lot of papers. It should not be enough to solely present a framework and say that it is “logic-based” and that it is “the” solution. You need to prove that it works (which is the essence of making it logic-based) and that it is “a” solution. Further, you need to show (in some way?) that it is better than other frameworks.

They say the framework “is potentially generative. For it to be so, we would need to have procedures which could automatically generate each of the type of dialogues if and when required.” The proposal is so abstract, and the work still to be done is so much greater than the work already done, so, again, can they make such claims?

Again, no. There needs to be “proof”. The realisation/implementation needs to be demonstrated.

They speak of Reed’s formalism [Dialogue frames in agent communication] as “descriptive rather than generative, in that it does not specify the forms of utterances, nor the rules which govern their formation, issuances and effects”. But, to be honest, much the same criticism can be levelled at their own paper, even though they say “the Agent Dialogue Framework we have proposed… is potentially generative as well as descriptive”.

Yes. A lot of the papers you will come across are wordy, despite assurances to the contrary.

Discussion regarding ‘Automated Negotiation’ and the opening two chapters of ‘Argumentation-Based Negotiation’

What exactly is a game-theoretic/heuristic approach to negotiation?

Such approaches make assumptions that lend themselves well to competitive games such as chess and the prisoner’s dilemma, where agents are self-interested, have full knowledge of the world and know what they want (i.e. to win the well-defined game). However, there will certainly be occasions where such characteristics do not apply well to negotiation/argumentation settings: agents may be altruistic, may not have full knowledge of the world, the resources and the other agents, and may not be able to correctly identify what it is that they want until they have gone through some rounds of dialogue.
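
For concreteness, the prisoner’s dilemma shows what those assumptions buy: both agents know the entire payoff matrix and their own preferences, so a “rational” move can be computed by a direct lookup. A toy sketch (textbook payoff values, nothing to do with the papers under discussion):

    # The classic prisoner's dilemma: both agents know the full payoff matrix
    # and their own preferences, the kind of complete information a
    # game-theoretic analysis assumes. Payoffs are the usual textbook
    # utilities (higher is better).

    PAYOFF = {                      # (my_move, their_move) -> my payoff
        ("C", "C"): 3, ("C", "D"): 0,
        ("D", "C"): 5, ("D", "D"): 1,
    }

    def best_response(their_move):
        """With the whole matrix known, a 'rational' reply is a simple lookup."""
        return max(("C", "D"), key=lambda my_move: PAYOFF[(my_move, their_move)])

    # Defection dominates whatever the other agent does -- the well-known result.
    assert best_response("C") == "D" and best_response("D") == "D"

In a negotiation where the payoffs are unknown, partial or only discovered through dialogue, no such direct computation is available, which is exactly the point above.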

In a particular negotiation – what would be going on? Can the work of Sadri et al. or that of my Master’s project be classified as game-theoretic?

In the sense that agents take turns, make strategic moves and have goals, yes, it could be considered game-theoretic. However, as discussed above, the basic assumptions of game theory do not apply.

“Argumentation-based approaches allow for more sophisticated forms of interaction than their game-theoretic and heuristic counterparts.” How and why?

Argumentation is not limited to making, accepting and rejecting proposals and counter-proposals; agents can also exchange the reasons behind their positions, and challenge or justify them.
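
A rough way to see the difference (my own illustration, not taken from the papers): a bargaining move carries nothing but the offer, whereas an argumentation move can also carry the reasons behind it, giving the recipient something to question or attack besides the offer itself.

    # Illustrative only: a bargaining move carries just an offer, while an
    # argumentation move also carries the reasons behind it, so the recipient
    # can do more than accept or reject -- it can challenge a specific premise.
    from dataclasses import dataclass, field

    @dataclass
    class Offer:                       # all a pure bargainer exchanges
        price: int

    @dataclass
    class ArguedOffer(Offer):          # the same offer plus its supporting reasons
        reasons: list = field(default_factory=list)

    def respond(move):
        if isinstance(move, ArguedOffer) and move.reasons:
            return ("challenge", move.reasons[0])   # attack one of the premises
        return ("reject", None)                     # all one can say to a bare offer

    assert respond(Offer(price=100)) == ("reject", None)
    assert respond(ArguedOffer(price=100, reasons=["stock is low"])) == ("challenge", "stock is low")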

Do the attacks on game-theory by the argumentation community have solid basis? Would a game-theorist have some sort of response to each of the attacks? What would they say?

Naturally, they would defend their work in an almost religious manner. It would be worth reading some recent literature to get a better understanding of game theory and to be better able to justify your approach as opposed to a game-theoretic one, though without spending too much time on it.

Discussion regarding most argumentation literature

The communication (negotiation, argumentation, whatever) in most of the literature is assumed to be between two agents. This is, in a sense, cheating. A lot of the really interesting and challenging problems lie in multi-agent dialogue.

There could be problems during the dialogue. For example, suppose two agents are negotiating and the first of the two makes a commitment to the other. But then during the course of the dialogue, a third agent comes in and offers the first agent a better deal. What should the first agent do?

There could also be problems after a particular dialogue has closed. For example, suppose two agents have concluded an argumentation dialogue and mutually agreed on a particular standpoint on a proposition. Suppose then that a third agent correctly convinces the first agent of a conflicting standpoint. Now the second agent unknowingly holds an incorrect standpoint. Does this need to be corrected? How? Perhaps the first agent has a duty to inform it? Perhaps there is a central blackboard for corrections? Perhaps a global announcement from the first agent is due?
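
Just to make the bookkeeping concrete, here is a sketch of one of the options floated above, the central blackboard for corrections. Everything in it is hypothetical and it sidesteps all the hard questions (trust, who may retract, what counts as a correction):

    # Hypothetical sketch of a "central blackboard for corrections": agreed
    # standpoints are recorded with the agents committed to them, and a later
    # retraction generates notifications for everyone else who agreed.
    from collections import defaultdict

    class Blackboard:
        def __init__(self):
            self.commitments = defaultdict(set)   # proposition -> committed agents

        def record_agreement(self, proposition, agents):
            self.commitments[proposition].update(agents)

        def retract(self, proposition, agent):
            """The agent withdraws its standpoint; the others who agreed are told."""
            others = self.commitments[proposition] - {agent}
            self.commitments[proposition].discard(agent)
            return [(other, f"{agent} has retracted '{proposition}'") for other in others]

    bb = Blackboard()
    bb.record_agreement("p", {"agent1", "agent2"})
    # agent3 later convinces agent1 of a conflicting standpoint, so agent1 retracts:
    assert bb.retract("p", "agent1") == [("agent2", "agent1 has retracted 'p'")]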

There is an abundance of work on one-to-many (centralised) negotiation such as auctioning, as described in [1] for example; however, many-to-many (distributed) argumentative negotiation seems to be an open and challenging area to head towards.

References
[1] P. Torroni and F. Toni. Extending a logic-based one-to-one negotiation framework to one-to-many negotiation. 2002.
