Notes taken from 'BDI Agents: From Theory to Practice' (1995), by Anand S. Rao and Michael P. Georgeff
3, Decision Trees to Possible Worlds
4, BDI Logics
The above transformation [of Section 3] provides the basis for developing a logical theory for deliberation by agents that is compatible with quantitative decision theory in those cases where we have good estimates for probabilities and payoffs. However, it does not address the case in which we do not have such estimates, nor does it address the dynamic aspects of deliberation, particularly those concerning commitment to previous decisions.
We begin by abstracting the model given above to reduce probabilities and payoffs to dichotomous (0-1) values. That is, we consider propositions to be either believed or not believed, desired or not desired, and intended or not intended, rather than ascribing continuous measures to them. Within such a framework, we first look at the static properties we would want of BDI systems and then at their dynamic properties...
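As a concrete reading of this abstraction, here is a minimal Python sketch (mine, not the paper's): each attitude becomes a plain set of propositions, and a proposition is believed, desired, or intended exactly when it belongs to the corresponding set.

    # A minimal sketch (mine, not the paper's) of the dichotomous view:
    # each attitude is a plain set of propositions, and a proposition is
    # believed, desired, or intended exactly when it belongs to that set.
    from dataclasses import dataclass, field

    @dataclass
    class BDIState:
        beliefs: set = field(default_factory=set)
        desires: set = field(default_factory=set)
        intentions: set = field(default_factory=set)

    # Proposition names are invented for illustration.
    s = BDIState(beliefs={"door_open"}, desires={"at_home"})
    assert "door_open" in s.beliefs      # believed -- no degree of belief
    assert "at_home" not in s.beliefs    # desired but simply not believed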
Static Constraints: The static relationships among the belief-, desire-, and intention-accessible worlds can be examined along two different dimensions, one with respect to the sets of possible worlds and the other with respect to the structure of the possible worlds...
Dynamic Constraints: As discussed earlier, an important aspect of a BDI architecture is the notion of commitment to previous decisions. A commitment embodies the balance between the reactivity and goal-directedness of an agent-oriented system. In a continuously changing environment, commitment lends a certain sense of stability to the reasoning process of an agent. This results in savings in computational effort and hence better overall performance.
A commitment usually has two parts to it: one is the condition that the agent is committed to maintain, called the commitment condition, and the second is the condition under which the agent gives up the commitment, called the termination condition. As the agent has no direct control over its beliefs and desires, there is no way that it can adopt or effectively realize a commitment strategy over these attitudes. Thus we restrict the commitment condition to intentions...
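The paper formalizes commitment strategies in temporal logic rather than code, but the two-part structure can be read as a loop guard. The hypothetical sketch below assumes a "single-minded" style of termination condition: the agent keeps acting on an intention until it believes the intention achieved or believes it impossible. The names achieved, impossible, and step are invented stand-ins for the agent's belief tests and actions, not the paper's constructs.

    # Hypothetical sketch: a commitment strategy read as a loop guard.
    # `achieved` and `impossible` stand in for belief tests, `step` for an
    # action towards the intention; none of these names are the paper's.
    def maintain(intention, achieved, impossible, step):
        # Commitment condition: keep acting on the intention.
        # Termination condition (single-minded style): give up once the
        # intention is believed achieved or believed impossible.
        while not (achieved(intention) or impossible(intention)):
            step(intention)
        return achieved(intention)

    # Toy usage: an "intention" achieved after three steps.
    state = {"n": 0}
    done = maintain("count_to_3",
                    achieved=lambda i: state["n"] >= 3,
                    impossible=lambda i: False,
                    step=lambda i: state.update(n=state["n"] + 1))
    assert done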
5, Abstract Architecture
6, Applications
7, Comparison and Conclusion
... While the earlier formalisms present a particular set of semantic constraints or axioms as being the formalization of a BDI agent, we adopt the view that one should be able to choose an appropriate BDI system for an application based on the rational behaviours required for that application. As a result, following the modal logic tradition, we have discussed how one can categorize different combinations of interactions between beliefs, desires, and intentions...
Monday, 12 November 2007
35.1-2, BDI Agents: From Theory to Practice
Notes taken from 'BDI Agents: From Theory to Practice' (1995), by Anand S. Rao and Michael P. Georgeff
1, Introduction
... A number of different approaches have emerged as candidates for the study of agent-oriented systems [] One such architecture views the system as a rational agent having certain mental attitudes of Belief, Desire and Intention (BDI), representing, respectively, the information, motivational, and deliberative states of the agent. These mental attitudes determine the system's behaviour and are critical for achieving adequate or optimal performance when deliberation is subject to resource bounds...
2, The System and its Environment
... First [] it is essential that the system have information on the state of the environment. But as this cannot necessarily be determined in one sensing action [] it is necessary that there be some component of system state that represents this information and which is updated after each sensing action. We call such a component the system's beliefs... Thus, beliefs can be viewed as the informative component of system state.
Second, it is necessary that the system also have information about the objectives to be accomplished or, more generally, what priorities or payoffs are associated with the various current objectives []... We call this component the system's desires, which can be thought of as representing the motivational state of the system.
... We seem caught on the horns of a dilemma: reconsidering the choice of action at each step is potentially too expensive and the chosen action possibly invalid, whereas unconditional commitment to the chosen course of action can result in the system failing to achieve its objectives. However, assuming that potentially significant changes can be determined instantaneously, it is possible to limit the frequency of reconsideration and thus achieve an appropriate balance between too much reconsideration and not enough []. For this to work, it is necessary to include a component of system state to represent the currently chosen course of action; that is, the output of the most recent call to the selection function. We call this additional state component the system's intentions. In essence, the intentions of the system capture the deliberative component of the system.
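Section 5 of the paper turns these three state components into an abstract interpreter loop. The Python sketch below paraphrases that loop under my own naming; the option generator, deliberation function, executor, and sensor are passed in as parameters standing in for the application-specific parts.

    # A Python paraphrase (my naming) of the abstract interpreter loop of
    # Section 5; the helpers are parameters standing in for the
    # application-specific components discussed above.
    def bdi_interpreter(beliefs, desires, intentions, events,
                        option_generator, deliberate, execute, sense,
                        max_cycles=100):
        for _ in range(max_cycles):          # deliberation is resource-bounded
            options = option_generator(events, beliefs, desires, intentions)
            selected = deliberate(options)   # the selection function
            intentions |= selected           # commit: record the chosen course
            execute(intentions, beliefs)     # act, possibly updating beliefs
            events.extend(sense())           # pick up new external events
            intentions -= beliefs            # drop intentions believed achieved
            if not events and not intentions:
                break                        # nothing left to deliberate about

The explicit cycle bound is one way to keep the sketch honest about the resource bounds mentioned in the introduction: reconsideration happens once per cycle rather than continuously.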
Friday, 9 November 2007
Agent's Goals
Taken from 'On the Generation of Bipolar Goals in Argumentation-Based Negotiation' (2005), by Leila Amgoud and Souhila Kaci
Typology of Goals
Recent studies in psychology claim that goals are bipolar: there are at least two kinds of goals, positive goals representing what the agent wants to achieve and negative goals representing what the agent rejects.
Beware that positive goals do not simply mirror what is not rejected, since a goal that is not rejected is not necessarily pursued. Goals that are neither negative nor positive are said to be in abeyance.
Note, however, that positive and negative goals are related by a coherence condition, which says that what is pursued should be among what is not rejected.
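As an illustration of the three categories and the coherence condition (my sketch, with invented goal names, not the authors' system):

    # Illustrative encoding of bipolar goals; goal names are invented.
    positive = {"sign_contract"}                # pursued
    negative = {"pay_penalty", "lose_client"}   # rejected
    universe = positive | negative | {"buy_coffee"}

    # Coherence condition: what is pursued is among what is not rejected.
    assert positive <= (universe - negative)

    # Goals that are neither positive nor negative are in abeyance.
    abeyance = universe - positive - negative
    assert abeyance == {"buy_coffee"}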
The Origins of Goals
An agent's goals generally come from two different sources:
- from beliefs that justify their existence: the agent believes that the world is in a state that warrants the existence of its goals. These goals are called initial goals, or conditional goals, because they depend on those beliefs.
- an agent can adopt a goal because doing so allows it to achieve an initial goal. These are called sub-goals or adopted goals.
A conditional rule is an expression of the form
R: c1 & ... & cn => g,
which expresses the fact that if c1, ..., cn are true, then the agent will have the goal g.
A planning rule is an expression of the form
P: g1 & ... & gn |-> g,
which means that the agent believes that if it realizes g1, ..., gn, then it will be able to achieve g.
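Read operationally, the two rule forms suggest a forward pass over beliefs (conditional rules generate initial goals) and a backward pass over adopted goals (planning rules generate sub-goals). A minimal Python sketch, with invented rule contents:

    # Sketch of goal generation from the two rule forms above; the rule
    # contents ("thirsty", "drink", ...) are invented for illustration.
    conditional_rules = [({"thirsty"}, "drink")]                # c1..cn => g
    planning_rules    = [({"have_cup", "have_water"}, "drink")] # g1..gn |-> g

    beliefs = {"thirsty"}

    # Conditional rules: beliefs generate initial goals.
    goals = {g for conds, g in conditional_rules if conds <= beliefs}

    # Planning rules: an adopted goal generates its sub-goals.
    subgoals = set()
    for conds, g in planning_rules:
        if g in goals:
            subgoals |= conds

    assert goals == {"drink"}
    assert subgoals == {"have_cup", "have_water"}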
Tuesday, 4 September 2007
Knowledge Representation and Reasoning
"Knowledge representation and reasoning is the area of Artificial Intelligence (AI) concerned with how knowledge can be represented symbolically and manipulated in an automated way by reasoning programs. More informally, it is the part of AI that is concerned with thinking, and how thinking contributes to intelligent behaviour... in the field of knowledge representation and reasoning we focus on the knolwedge, not on the knower..."
Source: 'Knowledge Representation and Reasoning' (2004), Ronald Brachman and Hector Levesque