Wednesday 30 April 2008

THE question: What is "the problem"?

(Note: Terms enclosed in quotation marks below most likely require background knowledge to be properly understood.)

My problem to solve is as follows: Given a number of "agents", each with a number of "resources" and each with "desires" for resources that it may or may not have, how do the agents exchange resources so that each has/obtains the resources it desires?

My solution: Allow the agents to "dialogue" between themselves by means of "argumentative negotiation". This will be achieved by modularising the problem into three inter-related sub-problems:

(1) Defining the "dialogue protocol" for argumentative negotiation, i.e.: What are the "messages" that agents can exchange? How are the messages connected to form an argumentative negotiation dialogue? What messages initiate the dialogue? What messages terminate the dialogue? When is a terminating dialogue "successful" and when is it "unsuccessful"?

(2) Defining the "agent policies", i.e.: What does the agent do with incoming messages? How does the agent know what messages are allowed to be sent at a certain stage according to the protocol? Out of all these allowed messages, which one does the agent select to send at a certain "turn"? Why/how?

(3) Defining the "knowledge-base" ("inference rules", allowed "assumptions", "contraries" of assumptions etc) which guides the decision-making of the agent at the policy/"strategy" level.

Note to self: Agent policies are to be defined so that they are largely independent of the definitions of the various knowledge-base components they call upon when run. This will make it possible to modify the underlying knowledge of an agent (i.e. how desires are specified, how preferences over resources are specified, how the incentive for exchanging resources is specified, and so on), and hence the behaviour of the agents and the results of the negotiations, without having to modify the dialogue protocol or the agent policies.
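
To see how this separation might look in practice, here is a minimal hypothetical sketch in Jason/AgentSpeak (all names, such as wants/1, has/1, desires/1 and the accept/reject replies, are placeholders of my own, not a settled design). The policy plans decide how to answer a request; the rule they consult belongs to the knowledge-base level and can be swapped without touching the plans:

// knowledge-base level: a rule defining when this agent wants a resource
// (replacing this definition changes behaviour without touching the policy)
wants(R) :- desires(R).

// policy level: how to respond to a request for resource R
+!respond(Sender, R) : has(R) & not wants(R)
   <- .send(Sender, tell, accept(R)).
+!respond(Sender, R) : true
   <- .send(Sender, tell, reject(R)).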

Monday 21 April 2008

Jason - note to self 6

Given a hypothetical execution as follows:
(1) an agent ag1 delegates a goal !g1 to an agent ag2;
(2) ag2 begins executing !g1 (as delegated by ag1);
(3) ag2 reaches a stage in the plan body of !g1 where it has to execute !g2;
(4) ag2 is in the process of executing !g2 (called from !g1);
(5) ag2 receives an 'unachieve !g1' message from ag1.

Now, in processing the 'unachieve' message, !g1 would be removed from the current set of intentions. !g2 (and any goals subsequently called by !g2 that are currently part of the same intention) would also be removed.

This is because all those plans chosen to achieve sub-goals would be within the stack of plans forming the intention, on top of the plan for !g1, which would be dropped (and everything on top of it too, necessarily).
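
A minimal sketch of steps (1)-(4) in Jason (the plan bodies and action names are illustrative only; the point is that the plan for !g2 sits on top of the plan for !g1 within the same intention):

// ag1: delegate g1 to ag2
!start.
+!start <- .send(ag2, achieve, g1).

// ag2: the plan for g1 calls !g2 as a subgoal, so the plan chosen for
// !g2 is pushed onto the same intention, on top of the plan for !g1
+!g1 <- step_one; !g2; step_three.
+!g2 <- step_two.

// an 'unachieve g1' from ag1 at step (5) drops this whole intention:
// the plan for !g1 and everything stacked on top of it, including !g2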

HOWEVER... the case for '!!' is different (recall that this operator allows the agent to achieve a goal in a SEPARATE intention). If !g2 is achieved in a separate intention, the link back to !g1 is lost, so dropping !g1 does not remove !g2: we lose track of why we were trying to achieve the goal. Needless to say, although the '!!' operator is provided because it can be useful, this is one of the reasons why it should be used with care.

Now modify step (5) to read "ag2 receives an 'unachieve !g2' message from ag1". In this case, !g1 will also be dropped, since the 'unachieve' handling uses the '.drop_desire' internal action: '.drop_desire(g2)' "kills" the whole intention in which !g2 appears, and no failure event is produced. Using '.fail_goal' instead of '.drop_desire' gives a different behaviour: the plan for !g2 would be terminated and a failure event for !g1 (note: g1, not g2) would be generated.
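
A sketch of the '.fail_goal' alternative. This assumes (as with the kqmlPlans.asl shipped with Jason) that the default message handling can be overridden by the agent's own plan for the kqml_received event; the recovery plan body is a placeholder:

// override the default 'unachieve' handling, which uses .drop_desire
+!kqml_received(Sender, unachieve, Goal, MsgId)
   <- .fail_goal(Goal).

// now, on 'unachieve g2', the plan for !g2 is terminated and a failure
// event for !g1 is generated, which a recovery plan can react to:
-!g1 <- .print("delegated goal g1 failed; tidying up").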

Saturday 19 April 2008

Jason - note to self 5

When an agent receives an 'unachieve' message, it checks all its related desires (goal addition events in the set of events) and intentions (goal additions in the set of intentions or in suspended intentions) and drops them.

In a situation where the agent receives the 'unachieve' message BEFORE it even has the desire to achieve the goal in question, nothing will be done.

An 'unachieve' message, say from ag1 to ag2, should "kill" only those intentions in ag2 that were created by 'achieve' messages from ag1 (i.e. agents should only be able to drop desires in other agents that their own 'achieve' messages created). However, this is not presently the case.
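
A hypothetical fix, assuming delegated goals carry a source(Sender) annotation and that the default handling can be overridden as sketched in the note above (this is NOT the current Jason behaviour):

// drop only those desires for Goal that were created by this sender
+!kqml_received(Sender, unachieve, Goal, MsgId)
   <- .drop_desire(Goal[source(Sender)]).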

Jason - note to self 4

Agent programming is intrinsically concurrent: agents and the environment, unless otherwise specified, run asynchronously, so various executions of a given piece of code are possible. The different executions represent different interleavings of the threads of the agents, as determined by the operating system's scheduling. With two processors (or in a distributed network), the possibilities increase further.

Summing up, the asynchronous execution and the way events are produced by the belief update function can lead to very different executions. This, however, corresponds to "real life". If more detailed control is required, the execution should be synchronised; this can be done with the "synchronous" execution mode.
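
Note on switching the synchronous mode on: as far as I recall from the Jason manual, an executionControl entry is added to the .mas2j project file (sketch from memory; the class name below should be double-checked):

MAS sync_example {
    infrastructure: Centralised
    executionControl: jason.control.ExecutionControl   // agents step in lockstep
    agents: ag1; ag2;
}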

It is worth reading up on concurrency and threads. A good Distributed Systems book should suffice.

Wednesday 16 April 2008

Jason - note to self 3

It does not make much sense to have a plan to handle failures of a goal that has no plans. In other words, to have
-!g .....
without
+!g .....
does not make sense.

The "-!g" should be read like "in case the plan to achieve !g fails, do this".

Jason - note to self 2

Initial beliefs generate belief addition events when the agent code is run.
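
For example, with the initial belief below, the plan fires once at start-up:

b(1).                                          // initial belief

+b(X) <- .print("belief b(", X, ") added").    // triggered by the +b(1) event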

Jason - note to self 1

When an agent itself creates a goal, the "source(self)" annotation is not added automatically, so the goal ends up with no source annotation at all. The solution is to add the source explicitly when the agent creates the goal itself.
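
For example (g1 and init are placeholders of my own):

!init.                          // initial goal

+!init <- !g1[source(self)].    // add the source annotation explicitly

// this plan can now tell that g1 was created by the agent itself,
// rather than delegated by another agent
+!g1[source(self)] <- .print("g1 is my own goal").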

Update and Abstract Plan

I haven't posted for a month so I thought I would do a quick update. I spent the previous two weeks setting up and playing with Jade and CaSAPI, and will be spending this week reading up on and playing with Jason.

As for an abstract plan of what I want to achieve, here it is:

The plan is to present a framework that allows agents to negotiate, primarily in resource re-allocation settings, making use of ideas from argumentation. This will involve (i) defining the agent mind, i.e. the internal reasoning of the agents; (ii) defining dialogue protocols that allow for argumentative negotiation between agents; (iii) defining strategies/policies that allow agents to generate moves and participate in accordance with the dialogue protocols; (iv) detailing the properties and results of the framework, and testing the hypothesis "argumentation allows for better deals to be reached, more efficiently, than negotiation that does not make use of argumentation".