Notes taken from 'Argument-based Negotiation among BDI Agents' (2002), by Sonia V. Rueda, Alejandro J. Garcia, Guillermo R. Simari
4, Collaborative Agents
Collaborative MAS: A collaborative Multi-Agent System will be a pair consisting of a set of argumentative BDI agents and a set of shared beliefs.
(Negotiating Beliefs; Proposals and Counterproposals; Side-effects; Failure in the Negotiation)
5, Communication Languages
(Interaction Protocol; Interaction Language; Negotiation Primitives)
6, Conclusions and Future Work...
Tuesday, 26 June 2007
26.3, Argument-based Negotiation among BDI Agents
Notes taken from 'Argument-based Negotiation among BDI Agents' (2002), by Sonia V. Rueda, Alejandro J. Garcia, Guillermo R. Simari
3, Planning and Argumentation
Argumentative BDI Agent: The agent's desires D will be represented by a set of literals that will also be called goals. A subset of D will represent a set of committed goals and will be referred to as the agent's intentions... The agent's beliefs will be represented by a restricted Defeasible Logic Program... Besides its beliefs, desires and intentions, an agent will have a set of actions that it may use to change its world.
Action: An action A is an ordered triple (P, X, C), where P is a set of literals representing preconditions for A, X is a consistent set of literals representing consequences of executing A, and C is a set of constraints of the form not L, where L is a literal.
Applicable Action...
Action Effect...
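As a rough illustration only (the names and the "set of warranted literals" interface are my assumptions, not the paper's formalism), the action triple A = (P, X, C) and its applicability test might be sketched as:

```python
# Hypothetical sketch: an action A = (P, X, C) over literals written as
# strings 'a' / '~a'. Not the paper's actual definitions.

from dataclasses import dataclass

def neg(lit: str) -> str:
    """Complement of a literal: 'a' <-> '~a'."""
    return lit[1:] if lit.startswith("~") else "~" + lit

@dataclass
class Action:
    preconditions: set   # P: literals that must be warranted
    consequences: set    # X: literals holding after the action is executed
    constraints: set     # C: literals L such that 'not L' must hold (L not warranted)

    def applicable(self, warranted: set) -> bool:
        return (self.preconditions <= warranted
                and self.constraints.isdisjoint(warranted))

    def apply(self, warranted: set) -> set:
        """Naive effect: add the consequences, drop their complements."""
        return (warranted - {neg(l) for l in self.consequences}) | self.consequences

# Example (hypothetical): open_door = ({~locked}, {open}, {broken})
open_door = Action({"~locked"}, {"open"}, {"broken"})
print(open_door.applicable({"~locked", "closed"}))   # True
```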
Labels: argumentation, computing, dialogues, logic, multiagent systems, negotiation
26.1-2, Argument-based Negotiation among BDI Agents
Notes taken from 'Argument-based Negotiation among BDI Agents' (2002), by Sonia V. Rueda, Alejandro J. Garcia, Guillermo R. Simari
"... Here we propose a deliberative mechanism for negotiation among BDI agents based in Argumentation."
1, Introduction
In a BDI agent, mental attitudes are used to model its cognitive capabilities. These mental attitudes include Beliefs, Desires and Intentions, among others such as preferences, obligations and commitments. These attitudes represent the agent's motivations and its informational and deliberative states, which are used to determine its behaviour.
Agents will use a formalism based on argumentation in order to obtain plans for their goals, which are represented by literals. They will begin by trying to construct a warrant for the goal. That might not be possible because some needed literals are not available. The agent will try to obtain those missing literals, regarded as subgoals, by executing the actions it has available. When no action can achieve the subgoals, the agent will request collaboration...
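A minimal sketch of the control flow just described (try to warrant the goal, treat missing literals as subgoals, fall back on collaboration); the helper methods are placeholders standing in for the paper's defeasible-argumentation machinery, not real APIs:

```python
# Sketch of the planning loop described above. build_warrant(),
# warranted_literals() and request_collaboration() are assumed placeholders.

def achieve(goal, agent):
    # Try to build a warrant for the goal from the agent's beliefs.
    warrant, missing = agent.build_warrant(goal)
    if warrant is not None:
        return []                                   # goal already warranted: nothing to do
    plan = []
    for subgoal in missing:                         # missing literals become subgoals
        candidates = [a for a in agent.actions
                      if subgoal in a.consequences
                      and a.applicable(agent.warranted_literals())]
        if candidates:
            plan.append(candidates[0])              # an own action achieves the subgoal
        else:
            agent.request_collaboration(subgoal)    # no suitable action: ask other agents
    return plan
```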
2, The Construction of a BDI Agent's Plan
Practical reasoning involves two fundamental processes: deciding which goals are going to be pursued, and choosing a plan for how to achieve them... The selected options will make up the agent's intentions; they will also have an influence on its actions, restrict future practical reasoning, and persist (in some way) in time...
... Abilities are associated with actions that have preconditions and consequences...
"... Here we propose a deliberative mechanism for negotiation among BDI agents based in Argumentation."
1, Introduction
In a BDI agent, mental attitudes are used to model its cognitive capabilities. These mental attitudes include Beliefs, Desires and Intentions among others such as preferences, obligations, commitments, etc. These attitudes represent motivations of the agent and its informational and deliberative states which are used to determine its behaviour.
Agents will use a formalism based in argumentation in order to obtain plans for their goals represented by literals. They will begin by trying to construct a warrant for the goal. That might not be possible because some need literals are not available. The agent will try to obtain those missing literals, regarded as subgoals, by executing the actions it has available. When no action can achieve the subgoals the agent will request collaboration...
2, The Construction of a BDI Agent's Plan
Practical reasoning involves two fundamental processes: decide what goals are going to be pursued, and choose a plan on how to achieve them... The selected options will make up the agent's intentions; they will also have an influence on its actions, restrict future practical reasoning, and persist (in some way) in time...
... Abilities are associated with actions that have preconditions and consequences...
Labels: argumentation, computing, dialogues, logic, multiagent systems, negotiation
Friday, 22 June 2007
Requesting
Taken from 'Reasoning About Rational Agents' (2000), by Michael Wooldridge
Request speech acts (directives) are attempts by a speaker to modify the intentions of the hearer. However, we can identify at least two different types of requests:
- Requests to bring about some state of affairs: An example of such a request would be when one agent said "Keep the door closed." We call such requests "requests-that".
- Requests to perform some particular action: An example of such a request would be when one agent said "Lock the door." We call such requests "requests-to".
Requests-that are more general than requests-to. In the former case (requests-that), the agent communicates an intended state of affairs, but does not communicate the means to achieve this state of affairs... In the case of requesting to, however, the agent does not communicate the desired state of affairs at all. Instead, it communicates an action to be performed, and the state of affairs to be achieved lies implicit within the action that was communicated...
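As a rough illustration (the message fields below are my assumptions, not Wooldridge's notation), the two kinds of request differ simply in what the message carries:

```python
# Illustrative only: requests-that carry a state of affairs, requests-to
# carry an action. Field names are assumptions, not from the book.

from dataclasses import dataclass

@dataclass
class RequestThat:          # "Keep the door closed."
    sender: str
    receiver: str
    state_of_affairs: str   # the intended state; the means are left to the hearer

@dataclass
class RequestTo:            # "Lock the door."
    sender: str
    receiver: str
    action: str             # the action to perform; the intended state stays implicit

r1 = RequestThat("a1", "a2", "door_closed")
r2 = RequestTo("a1", "a2", "lock_door")
```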
25.6-9, Reasoning About Rational Agents
Notes taken from 'Reasoning About Rational Agents' (2000), by Michael Wooldridge
6, Collective Mental States
(Mutual Beliefs, Desires, and Intentions; Mutual Mental States and Teamwork)
7, Communication
(Speech Acts; Attempts; Informing; Requesting; Composite Speech Acts)
8, Cooperation
(What Is Cooperative Problem Solving?; Recognition; Team Formation; Plan Formation)
9, Logic and Agent Theory
(Specification; Implementation; Verification)
Thursday, 21 June 2007
25.4-5, Reasoning About Rational Agents
Notes taken from 'Reasoning About Rational Agents' (2000), by Michael Wooldridge
4, LORA Defined
(Syntax; Semantics; Derived Connectives; Some Properties of LORA)
5, Properties of Rational Agents
BDI Correspondence Theory
Pairwise Interactions between Beliefs, Desires and Intentions
(Int i X) => (Des i X): If an agent intends something, then it desires it. Intuitively, this schema makes sense for rational agents...
(Des i X) => (Int i X): If an agent desires something, then it intends it. In other words, an agent intends all its options... This formula does not appear to capture any interesting properties of agents.
(Bel i X) => (Des i X): This is a well-known, if not widely-admired property of agents known as realism ("accepting the inevitable"). For example, suppose I believe that the sun will definitely rise tomorrow. Then, one could argue, it makes no sense for me to desire that the sun will not rise... As a property of rational agents, realism seems too strong...
(Des i X) => (Bel i X): If an agent desires something, then it believes it. To give a concrete example, suppose I desire I am rich: should I then believe I am rich? Clearly not.
(Int i X) => (Bel i X): If an agent intends something, then it believes it... Suppose I have an intention to write a book; does this imply I believe I will write it? One could argue that, in general, it is too strong a requirement for a rational agent... While I certainly believe it is possible that I will succeed in my intention to write the book, I do not believe it is inevitable that I will do so...
(Bel i X) => (Int i X): If an agent believes something, then it intends it. Again, this is a kind of realism property... Suppose that I believe that X is true: should I then adopt X as an intention? Clearly not. This would imply that I would choose and commit to everything that I believed was true. Intending something implies selecting it and committing resources to achieving it. It makes no sense to suggest committing resources to achieving something that is already true.
These formulae are a useful starting point for our analysis of the possible relationships that exist among the three components of an agent's mental state. However, it is clear that a finer-grained analysis of the relationships is likely to yield more intuitively reasonable results.
Varieties of Realism
(Int i X) => ¬(Des i ¬X)
(Des i X) => ¬(Int i ¬X)
These properties say that an agent's intentions are consistent with its desires, and conversely, its desires are consistent with its intentions... These schemas, which capture intention-desire consistency, appear to be reasonable properties to demand of rational agents in some, but not all circumstances... Under certain circumstances, it makes sense for an agent to reconsider its intentions - to deliberate over them, and possibly change focus. This implies entertaining options (desires) that are not necessarily consistent with its current intentions...
(Bel i X) => ¬(Des i ¬X)
(Des i X) => ¬(Bel i ¬X)
These schemas capture belief-desire consistency. As an example of the first, if I believe it is raining, there is no point in desiring it is not raining, since I will not be able to change what is already the case. As for the second, on first consideration, this schema seems unreasonable. For example, I may desire to be rich while believing that I am not currently rich. But when we distinguish between present-directed and future-directed desires and beliefs, the property makes sense for rational agents...
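A toy reading of these consistency schemas over plain sets of literals (an illustration only; nothing here is LORA's semantics):

```python
# Check the consistency schemas above over sets of literals, where 'x' and
# '~x' are complements. Purely illustrative.

def neg(lit: str) -> str:
    return lit[1:] if lit.startswith("~") else "~" + lit

def consistent(attitudes_a: set, attitudes_b: set) -> bool:
    """No X in the first set whose complement appears in the second set."""
    return all(neg(x) not in attitudes_b for x in attitudes_a)

intentions = {"finish_book"}
desires    = {"finish_book", "take_holiday"}
beliefs    = {"~rich"}

print(consistent(intentions, desires))  # intention-desire consistency: True
# Desiring "rich" here would violate desire-belief consistency, since "~rich" is believed.
print(consistent(desires, beliefs))     # True
```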
Systems of BDI Logic
The Side-Effect Problem
The side-effect problem is illustrated by the following scenario: "Janine intends to visit the dentist in order to have a tooth pulled. She is aware that as a consequence of having a tooth pulled, she will suffer pain. Does Janine intend to suffer pain?"
... It is generally agreed that rational agents do not have to intend the consequences of their intentions. In other words, Janine can intend to have a tooth pulled, believing that this will cause pain, without intending to suffer pain.
25.1-3, Reasoning About Rational Agents
Notes taken from 'Reasoning About Rational Agents' (2000), by Michael Wooldridge
1, Rational Agents
(Properties of Rational Agents, A Software Engineering Perspective, Belief-Desire-Intention Agents, Reasoning About Belief-Desire-Intention Agents, FAQ)
2, The Belief-Desire-Intention Model
(Practical Reasoning, Intentions in Practical Reasoning, Implementing Rational Agents, The Deliberation Process, Commitment Strategies, Intention Reconsideration, Mental States and Computer Programs)
3, Introduction to LORA
This logic (LORA: "Logic of Rational Agents") allows us to represent the properties of rational agents and reason about them in an unambiguous, well-defined way.
Like any logic, LORA has a syntax, a semantics, and a proof theory. The syntax of LORA defines a set of acceptable constructions known as well-formed formulae (or just formulae). The semantics assign a precise meaning to every formula of LORA. Finally, the proof theory of LORA tells us some basic properties of the logic, and how further properties can be established.
The language of LORA combines four distinct components (an illustrative formula combining several of them is sketched after the list):
1. A first-order component, which is in essence classical first-order logic...
2. A belief-desire-intention component, which allows us to express the beliefs, desires, and intentions of agents within a system.
3. A temporal component, which allows us to represent the dynamic aspects of systems - how they vary over time.
4. An action component, which allows us to represent the actions that agents perform, and the effects of these actions.
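For illustration only (this formula is schematic and not quoted from the book), a single formula can combine the first-order, BDI and temporal components; the action component would contribute expressions such as (Happens α):

```latex
% Schematic LORA-style formula (illustrative, not from the book):
% "if agent i intends that it will inevitably eventually be rich,
%  then it also desires this"
\[
(\mathsf{Int}\; i\; \mathsf{A}\Diamond\, rich(i)) \Rightarrow (\mathsf{Des}\; i\; \mathsf{A}\Diamond\, rich(i))
\]
```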
24.12, An Introduction to Multiagent Systems
Notes taken from 'An Introduction to Multiagent Systems' (2002), by Michael Wooldridge
12, Logics for Multiagent Systems
(Why Modal Logic?, Possible-Worlds Semantics for Modal Logics)
Normal Modal Logics
The basic possible-worlds approach has the following disadvantages as a multiagent epistemic logic (the first two follow directly from the axioms of any normal modal logic, sketched below):
- agents believe all valid formulae;
- agents' beliefs are closed under logical consequence;
- equivalent propositions are identical beliefs; and
- if agents are inconsistent, then they believe everything.
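With the box operator read as "agent i believes", the necessitation rule gives belief of all valid formulae, and the K (distribution) axiom gives closure of belief under logical consequence:

```latex
\[
\text{Necessitation:}\quad \frac{\vdash \varphi}{\vdash \Box\varphi}
\qquad\qquad
\text{K axiom:}\quad \Box(\varphi \rightarrow \psi) \rightarrow (\Box\varphi \rightarrow \Box\psi)
\]
```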
Epistemic Logic for Multiagent Systems
Pro-attitudes: Goals and Desires
An obvious approach to developing a logic of goals or desires is to adapt possible-worlds semantics. In this view, each goal-accessible world represents one way the world might be if the agent's goals were realised. However, this approach falls prey to the side effect problem, in that it predicts that agents have a goal of the logical consequences of their goals (cf. the logical omniscience problem). This is not a desirable property: one might have a goal of going to the dentist, with the necessary consequence of suffering pain, without having a goal of suffering pain.
Common and Distributed Knowledge
Integrated Theories of Agency
When building intelligent agents - particularly agents that must interact with humans - it is important that a rational balance is achieved between the beliefs, goals, and intentions of agents.
"For example, the following are desirable properties of intention: an autonomous agent should act on its intentions, not in spite of them; adopt intentions it believes are feasible and forego those believed to be infeasible; keep (or commit to) intentions, but not forever; discharge those intentions believed to have been satisfied; alter inentions when relevant beliefs change; and adopt subsidiary intentions during plan formation." (Cohen and Levesque, 1990)
Recall the properties of intentions, as discussed in Chapter 4.
(1) Intentions pose problems for agents, who need to determine ways of achieving them.
(2) Intentions provide a 'filter' for adopting other intentions, which must not conflict.
(3) Agents track the success of their intentions, and are inclined to try again if their attempts fail.
(4) Agents believe their intentions are possible.
(5) Agents do not believe they will not bring about their intentions.
(6) Under certain circumstances, agents believe they will bring about their intentions.
(7) Agents need not intend all the expected side effects of their intentions.
Formal Methods in Agent-Oriented Software Engineering
24.5-11, An Introduction to Multiagent Systems
Notes taken from 'An Introduction to Multiagent Systems' (2002), by Michael Wooldridge
5, Reactive and Hybrid Agents
6, Multiagent Interactions
(Utilities and Preferences, Multiagent Encounters, Dominant Strategies and Nash Equilibria, Competitive and Zero-Sum Interactions, The Prisoner's Dilemma, Other Symmetric 2*2 Interactions, Dependence Relations in Multiagent Systems)
7, Reaching Agreements
(Mechanism Design, Auctions, Negotiation, Argumentation)
8, Communication
(Speech Acts, Agent Communication Languages, Ontologies for Agent Communication, Coordination Languages)
9, Working Together
(Cooperative Distributed Problem Solving, Task Sharing and Result Sharing, Result Sharing, Combining Task and Result Sharing, Handling Inconsistency, Coordination, Multiagent Planning and Synchronization)
10, Methodologies
(When is an Agent-Based Solution Appropriate?, Agent-Oriented Analysis and Design Techniques, Pitfalls of Agent Development, Mobile Agents)
11, Applications
24.3-4, An Introduction to Multiagent Systems
Notes taken from 'An Introduction to Multiagent Systems' (2002), by Michael Wooldridge
3, Deductive Reasoning Agents
(Agents as Theorem Provers, Agent-Oriented Programming, Concurrent MetateM)
4, Practical Reasoning Agents
Practical Reasoning Equals Deliberation Plus Means-End Reasoning: Practical reasoning is reasoning directed towards actions - the process of figuring out what to do.
"Practical reasoning is a matter of weighing conflicting considerations for and against competing options, where the relevant considerations are provided by what the agent desires/values/cares about and what the agent believes." (Bratman, 1990)
Human practical reasoning appears to consist of at least two distinct activities. The first of these involves deciding what state of affairs we want to achieve (deliberation); the second process involves deciding how we want to achieve these states of affairs (means-end reasoning).
We refer to the states of affairs that an agent has chosen and committed to as its intentions.
Intentions play the following important roles in practical reasoning:
- Intentions drive means-end reasoning...
- Intentions persist...
- Intentions constrain future deliberation...
- Intentions influence beliefs upon which future practical reasoning is based...
Means-Ends Reasoning: A planner is a system that takes as input representations of the following:
(1) A goal, intention or a task. This is something that the agent wants to achieve, or a state of affairs that the agent wants to maintain or avoid.
(2) The current state of the environment - the agent's beliefs.
(3) The actions available to the agent.
(Implementing a Practical Reasoning Agent, HOMER: an Agent That Plans)
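A minimal sketch of the control loop these notes describe (deliberation chooses intentions, means-ends reasoning produces a plan); the method names are placeholders, not Wooldridge's own pseudo-code:

```python
# Skeleton of a practical-reasoning (BDI-style) agent loop. The methods on
# `agent` are assumed placeholders for belief revision, deliberation and
# means-ends reasoning (planning).

def practical_reasoning_loop(agent, environment):
    while True:
        percept = environment.observe()
        agent.beliefs = agent.revise_beliefs(agent.beliefs, percept)
        # Deliberation: decide WHAT states of affairs to achieve.
        agent.intentions = agent.deliberate(agent.beliefs, agent.desires)
        # Means-ends reasoning: decide HOW to achieve them.
        plan = agent.plan(agent.beliefs, agent.intentions, agent.actions)
        for action in plan:
            environment.execute(action)
            if agent.reconsider(agent.beliefs, agent.intentions):
                break   # intentions persist, but not forever
```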
24.2, An Introduction to Multiagent Systems
Notes taken from 'An Introduction to Multiagent Systems' (2002), by Michael Wooldridge
2, Intelligent Agents
An agent is a computer system that is situated in some environment, and that is capable of autonomous action in this environment in order to meet its design objectives.
Environments: Russell and Norvig (1995) suggest the following classification of environment properties:
- Accessible versus inaccessible...
- Deterministic versus non-deterministic...
- Static versus dynamic...
- Discrete versus continuous...
Intelligent Agents: The following list of the kinds of capabilities that we might expect an intelligent agent to have was suggested by Wooldridge and Jennings (1995):
- Reactivity...
- Proactiveness...
- Social ability...
... What turns out to be hard is building a system that achieves an effective balance between goal-directed and reactive behaviour.
(Agents and Objects, Agents and Expert Systems, Agents as Intentional Systems, Abstract Architectures for Intelligent Agents, How to Tell an Agent What to Do, Synthesizing Agents)
24.1, An Introduction to Multiagent Systems
Notes taken from 'An Introduction to Multiagent Systems' (2002), by Michael Wooldridge
1, Introduction
This book is about multiagent systems. It addresses itself to two key problems:
- How do we build agents that are capable of independent, autonomous action in order to successfully carry out the tasks that we delegate to them?
- How do we build agents that are capable of interacting (cooperating, coordinating, negotiating) with other agents in order to successfully carry out the tasks that we delegate to them, particularly when the other agents cannot be assumed to share the same interests/goals?
The first problem is that of agent design, and the second problem is that of society design. The two problems are not orthogonal - for example, in order to build a society of agents that work together effectively, it may help if we give members of the society models of the other agents in it.
The Vision Thing: "You are in desperate need of a last minute holiday somewhere warm and dry. After specifying your requirements to your personal digital assistant (PDA), it converses with a number of different Web sites, which sell services such as flights, hotel rooms, and hire cars. After hard negotiation on your behalf with a range of sites, your PDA presents you with a package holiday."
There are many basic research problems that need to be solved in order to make such a scenario work; such as:
- How do you state your preferences to your agents?
- How can your agent compare different deals from different vendors?
- What algorithms can your agent use to negotiate with other agents (so as to ensure you are not 'ripped off')?
Objections to Multiagent Systems: Is it not all just distributed/concurrent systems?
In multiagent systems, there are two important twists to the concurrent systems story.
- First, because agents are assumed to be autonomous - capable of making independent decisions about what to do in order to satisfy their design objectives - it is generally assumed that synchronization and coordination structures in a multiagent system are not hardwired in at design time, as they typically are in standard concurrent/distributed systems. We therefore need mechanisms that will allow agents to synchronize and coordinate their activities at run time.
- Second, the encounters that occur among computing elements in a multiagent system are economic encounters, in the sense that they are encounters between self-interested entities. In a classic distributed/concurrent system, all the computing elements are implicitly assumed to share a common goal (of making the overall system function correctly). In multiagent systems, it is assumed instead that agents are primarily concerned with their own welfare (although of course they will be acting on behalf of some user/owner).
Tuesday, 12 June 2007
Backward and Forward Reasoning in Agents
The reasoning core of hybrid agents, which exhibit both rational/deliberative and reactive behaviour, is a proof procedure (executed within an observe-think-act cycle) that combines forward and backward reasoning:
Backward Reasoning: Used primarily for planning, problem solving and other deliberative activities.
Forward Reasoning: Used primarily for reactivity to the environment, possibly including other agents.
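A generic illustration, not tied to any particular hybrid-agent architecture, of the two reasoning directions over the same propositional rule base:

```python
# Rules as (premises, conclusion) pairs over propositional atoms.
RULES = [({"smoke"}, "fire"), ({"fire"}, "call_fire_brigade")]

def forward_chain(facts):
    """Data-driven (reactive): derive everything that follows from the observations."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in RULES:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

def backward_chain(goal, facts):
    """Goal-driven (deliberative): is the goal derivable from the facts?"""
    if goal in facts:
        return True
    return any(all(backward_chain(p, facts) for p in premises)
               for premises, conclusion in RULES if conclusion == goal)

print(forward_chain({"smoke"}))                       # {'smoke', 'fire', 'call_fire_brigade'}
print(backward_chain("call_fire_brigade", {"smoke"})) # True
```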
Conformance to Protocols
A protocol specifies the "rules of encounter" governing a dialogue between agents. It specifies which agent is allowed to say what in a given situation.
There are different levels of (an agent's) conformance to a protocol, as follows (see the sketch after this list):
- Weak conformance - iff it will never utter an illegal dialogue move.
- Exhaustive conformance - iff it is weakly conformant and it will utter at least one dialogue move when required by the protocol.
- Robust conformance - iff it is exhaustively conformant and it utters the (special) dialogue move "not-understood" whenever it receives an illegal move from the other agent.
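A rough sketch of how the three levels could be checked against a recorded dialogue trace; the protocol and trace interfaces below are assumptions, not part of the source note:

```python
# Illustrative conformance check over a recorded dialogue trace.
# protocol.legal_moves(state), protocol.must_reply(state) and the trace
# methods are assumed helpers, not from the source.

def weakly_conformant(trace, protocol):
    return all(move in protocol.legal_moves(state)
               for state, move in trace.own_moves())

def exhaustively_conformant(trace, protocol):
    return (weakly_conformant(trace, protocol)
            and all(trace.replied_at(state)
                    for state in trace.states() if protocol.must_reply(state)))

def robustly_conformant(trace, protocol):
    return (exhaustively_conformant(trace, protocol)
            and all(trace.reply_to(move) == "not-understood"
                    for state, move in trace.incoming_moves()
                    if move not in protocol.legal_moves(state)))
```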
Deduction, Induction, Abduction
Deduction: An analytic process based on the application of general rules to particular cases, with the inference of a result.
Induction: Synthetic reasoning which infers the rule from the case and the result.
Abduction: Another form of synthetic inference, but of the case from a rule and a result.
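Peirce's classic bean example (not from the source note) instantiates the three patterns:

```latex
% Peirce's bean example (illustrative; not from the source note)
\begin{align*}
\textbf{Deduction:}\;& \text{Rule} \wedge \text{Case} \vdash \text{Result}
  && \text{all beans from this bag are white; these beans are from this bag; so these beans are white}\\
\textbf{Induction:}\;& \text{Case} \wedge \text{Result} \leadsto \text{Rule}
  && \text{these beans are from this bag and are white; so all beans from this bag are white}\\
\textbf{Abduction:}\;& \text{Rule} \wedge \text{Result} \leadsto \text{Case}
  && \text{all beans from this bag are white; these beans are white; so these beans are from this bag}
\end{align*}
```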
Friday, 8 June 2007
23, Conflict-free normative agents using assumption-based argumentation
Notes taken from 'Conflict-free normative agents using assumption-based argumentation' (2007), by Dorian Gaertner and Francesca Toni
"... We (map) a form of normative BDI agents onto assumption-based argumentation. By way of this mapping we equip our agents with the capability of resolving conflicts amongst norms, belifs, desires and intentions. This conflict resolution is achieved by using the agent's preferences, represented in a variety of formats..."
1, Introduction
Normative agents that are governed by social norms may see conflicts arise amongst their individual desires, or beliefs, or intentions. These conflicts may be resolved by rendering information (such as norms, beliefs, desires and intentions) defeasible and by enforcing preferences. In turn, argumentation has proved to be a useful technique for reasoning with defeasible information and preferences when conflicts may arise.
In this paper we adopt a model for normative agents, whereby agents hold beliefs, desires and intentions, as in a conventional BDI model, but these mental attitudes are seen as contexts and the relationships amongst them are given by means of bridge rules...
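As a loose illustration only (this is not the paper's actual translation into assumption-based argumentation; the components just follow the usual rules/assumptions/contraries setup), a conflict between a norm and a desire resolved by a preference ordering might look like:

```python
# Loose sketch of conflict resolution in an assumption-based-argumentation
# style: two assumptions support contrary conclusions, and a preference
# ordering decides which assumption survives. Not the paper's mapping.

assumptions = {"norm_says_attend_meeting", "desire_to_stay_home"}
contrary = {                       # contrary of each assumption
    "norm_says_attend_meeting": "stay_home",
    "desire_to_stay_home": "attend_meeting",
}
rules = {                          # assumption -> conclusion it supports
    "norm_says_attend_meeting": "attend_meeting",
    "desire_to_stay_home": "stay_home",
}
preference = ["norm_says_attend_meeting", "desire_to_stay_home"]  # strongest first

def surviving_intention():
    for a in preference:           # take the most preferred assumption first
        attackers = [b for b in assumptions if b != a and rules[b] == contrary[a]]
        if all(preference.index(b) > preference.index(a) for b in attackers):
            return rules[a]        # all attackers are strictly weaker
    return None

print(surviving_intention())       # 'attend_meeting' (the norm is preferred here)
```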
2, BDI+N Agents: Preliminaries
(Background (BDI+N agents), Norm Representation in BDI+N Agents, Example)
3, Conflict Avoidance
(Background (Assumption-based argumentation framework), Naive Translation into Assumption-Based Argumentation, Avoiding Conflicts using Assumption-Based Argumentation)
4, Conflict Resolution using Preferences
(Preferences as a Total Ordering, Preferences as a Partial Ordering, Defining Dynamic Preferences via Meta-rules)
5, Conclusions
In this paper we have proposed to use assumption-based argumentation to solve conflicts that a normative agent can encounter, arising from applying conflicting norms but also due to conflicting beliefs, desires and intentions. We have employed qualitative preferences over an agent's beliefs, desires and intentions and over the norms it is subjected to in order to resolve conflicts...
"... We (map) a form of normative BDI agents onto assumption-based argumentation. By way of this mapping we equip our agents with the capability of resolving conflicts amongst norms, belifs, desires and intentions. This conflict resolution is achieved by using the agent's preferences, represented in a variety of formats..."
1, Introduction
Normative agents that are governed by social norms may see conflicts arise amongst their individual desires, or beliefs, or intentions. These conflicts may be resolved by rendering information (such as norms, beliefs, desires and intentions) defeasible and by enforcing preferences. In turn, argumentation has proved to be a useful technique for reasoning with defeasible information and preferences when conflicts may arise.
In this paper we adopt a model for normative agents, whereby agents hold beliefs, desires and intentions, as in a conventional BDI model, but these mental attitudes are seen as contexts and the relationship amongst them are given by means of bridge rules...
2, BDI+N Agents: Preliminaries
(Background (BDI+N agents), Norm Representation in BDI+N Agents, Example)
3, Conflict Avoidance
(Background (Assumption-based argumentation framework), Naive Translation into Assumption-Based Argumentation, Avoiding Conflicts using Assumption-Based Argumentation)
4, Conflict Resolution using Preferences
(Preferences as a Total Ordering, Preferences as a Partial Ordering, Defining Dynamic Preferences via Meta-rules)
5, Conclusions
In this paper we have proposed to use assumption-based argumentation to solve conflicts that a normative agent can encounter, arising from applying conflicting norms but also due to conflicting beliefs, desires and intentions. We have employed qualitative preferences over an agent's beliefs, desires and intentions and over the norms it is subjected to in order to resolve conflicts...
Tuesday, 5 June 2007
Topics of automated negotiation research
Taken from ‘Automated Negotiation: Prospects, Methods and Challenges’ (2001), by N. R. Jennings et al.
Automated negotiation research can be considered to deal with three broad topics:
- Negotiation Protocols: the set of rules that govern the interaction...
- Negotiation Objects: the range of issues over which agreement must be reached...
- Agents’ Decision Making Models: the decision making apparatus the participants employ to act in line with the negotiation protocol in order to achieve their objectives...
22, The Carneades Argumentation Framework
Notes taken from ‘The Carneades Argumentation Framework (Using Presumptions and Exceptions to Model Critical Questions)’ (2003), by Thomas F. Gordon and Douglas Walton
“We present a formal, mathematical model of argument structure and evaluation, called the Carneades Argumentation Framework… (which) uses three kinds of premises (ordinary premises, presumptions and exceptions) and information about the dialectical status of arguments (undisputed, at issue, accepted or rejected) to model critical questions in such a way to allow the burden of proof to be allocated to the proponent or the respondent, as appropriate.”
1, Introduction
The Carneades Argumentation Framework uses the device of critical questions to evaluate an argument... The evaluation of arguments in Carneades depends on the state of the dialog. Whether or not a premise of an argument holds depends on whether it is undisputed, at issue, or decided. One way to raise an issue is to ask a critical question. Also, the proof standard applicable for some issue may depend on the stage of the dialog. In a deliberation dialog, for example, a weak burden of proof would seem appropriate during brainstorming, in an early phase of the dialog...
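A very simplified illustration of these ingredients; the holds() conditions below are an illustrative reading of how the burden of proof differs by premise type, not the paper's exact definitions, and the statements are hypothetical:

```python
# Simplified illustration: three premise kinds and a dialectical status per
# statement. The holds() rules are an assumed reading, NOT the paper's model.

STATUS = {"tooth_needs_pulling": "accepted",             # hypothetical statements
          "dentist_available": "undisputed",
          "patient_allergic_to_anaesthetic": "at issue"}

def holds(statement, kind):
    s = STATUS.get(statement, "undisputed")
    if kind == "ordinary":
        return s == "accepted"                  # proponent must establish it
    if kind == "presumption":
        return s in ("accepted", "undisputed")  # holds until the respondent raises the issue
    if kind == "exception":
        return s == "accepted"                  # only blocks once actually established

def argument_applicable(ordinary, presumptions, exceptions):
    return (all(holds(p, "ordinary") for p in ordinary)
            and all(holds(p, "presumption") for p in presumptions)
            and not any(holds(e, "exception") for e in exceptions))

print(argument_applicable(["tooth_needs_pulling"], ["dentist_available"],
                          ["patient_allergic_to_anaesthetic"]))   # True
```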
2, Argument Structure...
3, Argument Evaluation...
4, Conclusion...