Tuesday, 24 December 2019
ASCII, ISO and UTF
I struggle to remember what the ASCII, ISO and UTF acronyms stand for, so I'm typing them up here in the hope that they stick!
- ASCII = American Standard Code for Information Interchange: the 7-bit character encoding (7 bits, so 128 characters in total, codes 0-127) that forms the basis of the other two systems.
- ISO = International Organization for Standardization: the 8-bit encodings of the ISO-8859 family, which has a number of different versions each supporting a different set of languages (e.g. ISO-8859-1, ISO-8859-2, etc.).
- UTF = Unicode Transformation Format: the family of Unicode encodings in which a character takes a variable number of bytes (1-4 in UTF-8), and which therefore supports a vastly larger number of characters than the other two systems.
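A quick way to see the size difference in practice, sketched in Java (a minimal illustration; the sample characters are arbitrary): ASCII characters fit in one byte under UTF-8, while characters outside ASCII need more.

import java.nio.charset.StandardCharsets;

public final class EncodingSizes {
    public static void main(String[] args) {
        // Each string holds one code point; UTF-8 spends 1-4 bytes on it.
        String[] samples = {"A", "é", "€", "😀"};
        for (String s : samples) {
            int utf8Bytes = s.getBytes(StandardCharsets.UTF_8).length;
            System.out.println(s + " -> " + utf8Bytes + " byte(s) in UTF-8");
        }
        // Prints 1, 2, 3 and 4 bytes respectively.
    }
}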
Saturday, 2 March 2013
When methods and functions do too much
It's like asking a robot to get you some milk and it comes back to you with flavoured milk. You might want flavoured milk, but you might not! Your methods should only do as much as they are contracted to do. No more.
Here's another example: you create an Android AlertDialog with the AlertDialog.Builder class. Now every time you click one of the buttons of your AlertDialog, the AlertDialog is automatically dismissed, as well as notifying you that a button was clicked. But you don't want the AlertDialog to be dismissed, you only want to be notified that a button was clicked! Sometimes doing more is less helpful.
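A minimal sketch of one common workaround, assuming the standard android.app.AlertDialog API (the class name and button labels below are illustrative): attach the listener to the button after show(), so that clicking it no longer dismisses the dialog automatically.

import android.app.AlertDialog;
import android.content.Context;
import android.content.DialogInterface;
import android.view.View;

public final class NonDismissingDialog {
    // Shows a dialog whose positive button does NOT auto-dismiss it.
    // A listener passed to setPositiveButton() would dismiss on click;
    // a listener attached to the button after show() does not.
    public static AlertDialog show(Context context) {
        final AlertDialog dialog = new AlertDialog.Builder(context)
                .setTitle("Keep me open")
                .setPositiveButton("OK", null)      // no auto-dismissing handler here
                .setNegativeButton("Cancel", null)
                .create();
        dialog.show();
        dialog.getButton(DialogInterface.BUTTON_POSITIVE)
              .setOnClickListener(new View.OnClickListener() {
                  @Override
                  public void onClick(View v) {
                      // Handle the click; call dialog.dismiss() only when you choose to.
                  }
              });
        return dialog;
    }
}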
Monday, 4 May 2009
Computing for Kids
I need to demonstrate/explain Computing to young children (9 years of age) in a fun/interactive way. I thought of the following group exercises:
Find Median - split children into teams of 7 or 9, have each team make a line, give each child in the team a number (in no particular order), ask the children in their teams to work out the middle (median) number. The key is for them to first assign a captain and order themselves by their numbers (highest to lowest or lowest to highest).
Bubble Sort - split children into teams, have each team make a line with each child spaced out from the next by one metre, give each child in the team a number (in no particular order), ask the children to sort themselves (highest to lowest or lowest to highest) while only being allowed to speak to the person immediately in front or behind. They can't get around this one by assigning a captain! (A code sketch of the same rule follows this list.)
Resource Allocation - split children into teams of 7ish, have each team make a circle, give each child in the team an item (chocolate?) and a goal (item to obtain), ask the children to maximise the number of "happy" children in their teams. Need to think of cases involving conflict.
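A minimal sketch of the rule the Bubble Sort exercise acts out, assuming each child is modelled simply as a number in an array: only adjacent pairs are ever compared and swapped.

import java.util.Arrays;

public final class PlaygroundBubbleSort {
    // Sorts in place using only adjacent comparisons and swaps,
    // mirroring the "talk only to your neighbour" rule of the exercise.
    static void bubbleSort(int[] line) {
        boolean swapped = true;
        while (swapped) {
            swapped = false;
            for (int i = 0; i < line.length - 1; i++) {
                if (line[i] > line[i + 1]) {
                    int tmp = line[i];
                    line[i] = line[i + 1];
                    line[i + 1] = tmp;
                    swapped = true;
                }
            }
        }
    }

    public static void main(String[] args) {
        int[] team = {7, 3, 9, 1, 5};
        bubbleSort(team);
        System.out.println(Arrays.toString(team)); // [1, 3, 5, 7, 9]
    }
}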
Monday, 26 November 2007
37, An implementation of norm-based agent negotiation
Notes taken from 'An implementation of norm-based agent negotiation' (2007), by Peter Dijkstra, Henry Prakken, Kees de Vey Mestdagh
1, Introduction
2, The Problem of Regulated Information Exchange
3, Requirements for the Multi-Agent Architecture
Knowledge: In order to regulate distributed information exchange, agents must have knowledge of the relevant regulations and the local interpretations of those regulations, their goals and the likely consequences of their actions...
Reasoning: ... the agents should be capable of generating and evaluating arguments for and against certain claims and they must be able to revise their beliefs as a result of the dialogues. Finally, in order to generate conditional offers, the agents should be able to do some form of hypothetical reasoning.
Communication: ...
4, Formalisation
Dialogical interaction: Communication language; Communication protocol
5, Agent Architecture
Description of the Components: User communication module; Database communication module; Agent communication language; Execution cycle module; Negotiation policy module; Argumentation system module
Negotiation Policy: ... Our negotiation policies cover two issues: the normative issue of whether accepting an offer is obligatory or forbidden, and the teleological issue whether accepting an offer violates the agent's own interests. Of course these policies can be different for the requesting and the responding agent... In the negotiation policy for a reject, the policy returns a why-reject move which starts an embedded persuasion dialogue. The specification and implementation of embedded persuasion dialogues will be the subject of future research.
Agent execution cycle: The agent execution cycle processes messages and triggers other modules during the selection of the appropriate dialogue moves. First, the speech act, locution and content are parsed from the incoming message, then depending on the locution (offer, accept, withdraw or reject) the next steps are taken... The execution cycle can be represented in Java pseudo-code...
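The paper gives this cycle in Java pseudo-code; below is a minimal sketch of how such a dispatch might look. The method and module names are assumptions for illustration, not the authors' implementation.

public final class ExecutionCycle {
    // One pass of the cycle, following the description above: the speech act,
    // locution and content are parsed from the incoming message, then the
    // locution decides which module selects the next dialogue move.
    String handle(String locution, String content) {
        switch (locution) {
            case "offer":
                return negotiationPolicy(content);   // evaluate the offer: accept, reject or counter-offer
            case "reject":
                return whyReject(content);           // may start an embedded persuasion dialogue
            case "accept":
            case "withdraw":
                return terminate(content);           // dialogue ends
            default:
                throw new IllegalArgumentException("unknown locution: " + locution);
        }
    }

    String negotiationPolicy(String content) { return "counter-offer"; }
    String whyReject(String content)         { return "why-reject"; }
    String terminate(String content)         { return "end"; }
}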
6, Illustration of the Agent Architecture
Knowledge base: Knowledge is represented in the Prolog-like syntax of the ASPIC tool...
Dialogue from example 2: ...
7, Conclusion
Sunday, 4 November 2007
Distinguishing Agents
Quotes taken from 'Agent Technology for e-Commerce' (2007), Maria Fasli
A paradigm shift (page 5):
"... What distinguishes agents from other pieces of software is that computation is not simply calculation, but delegation and interaction; users do not act upon agents as they do with other software programs, but they delegate tasks to them and interact with them in a conversational rather than in a command mode. Intrinsically, agents enable the transition from simple static algorithmic-based computation to dynamic interactive delegation-based service-oriented computation..."
The novelty in agents (page 8):
"So what is it that makes agents different, over and beyond other software? Whereas traditional software applications need to be told explicitly what it is that they need to accomplish and the exact steps that they have to perform, agents need to be told what the goal is but not how to achieve it. Then, being 'smart', they will actively seek ways to satisfy this goal, acting with the minimum intervention from the user. Agents will figure out what needs to be done to ahieve the delegated goal, but also react to any changes in the environment as they occur, which may affect their plans and goal accomplishment, and then subsequently modify their course of action..."
Saturday, 22 September 2007
32, Reaching Agreements Through Argumentation
Notes taken from 'Reaching agreements through argumentation: a logical model and implementation' (1998), Sarit Kraus, Katia Sycara, Amir Evenchik
1, Introduction
2, The Mental Model
Classification of intentions:
- "Intend-to-do", refers to actions within the direct control of the agent.
- "Intend-that", refers to propositions not directly within the agent's realm of control, that the agent must rely on other agents for satisfying.
(The Formal Model, Syntax, Semantics)
Agent Types: Bounded Agent, An Omniscient Agent, A Knowledgeable Agent, An Unforgetful Agent, A Memoryless Agent, A Non-observer, Cooperative Agents
3, Axioms for Argumentation and for Argument Evaluation
The argument types we present (in order of decreasing strength) are:
(1) Threats to produce goal adoption or goal abandonment on the part of the persuadee.
(2) Enticing the persuadee with a promise of a future reward.
(3) Appeal to past reward.
(4) Appeal to precedents as counterexamples to convey to the persuadee a contradiction between what she/he says and past actions.
(5) Appealing to "prevailing practice" to convey to the persuadee that the proposed action will further his/her goals since it has furthered others' goals in the past.
(6) Appeal to self-interest to convince a persuadee that taking this action will enable achievement of a high-importance goal.
"... Agents with different spheres of expertise may need to negotiate with each other for the sake of requesting each others' services. Their expertise is also their bargaining power..."
(Arguments Involving Threats, Evaluation of Threats, Promise of a Future Reward, Appeal to Past Promise, Appeal to "Prevailing Practice", Appeal to Self Interest, Selecting Arguments by an Agent's Type, An Example: Labor Union vs. Management Negotiation, Contract Net Example)
4, Automated Negotiation Agent (ANA)
The general structure of an agent consists of the following main parts:
- Mental state (beliefs, desires, goals, intentions)
- Characteristics (agent type, capabilities, belief verification capabilities)
- Inference rules (mental state update, argument generation, argument selection, request evaluation)
(The Structure of an Agent and its Life Cycle, Inference Rules for Mental State Changes, Argument Production and Evaluation, Argument Selection Rules, Request Evaluation Rules, The Blocks World Environment, Simulation of a Blocks World Scenario)
5, Related Work
(Mental State, Agent Oriented Languages, Multi-agent Planning, Automated Negotiation, Defeasible Reasoning and Computational Dialectics, Game Theory's Models of Negotiation, Social Psychology)
6, Conclusions
Labels: argumentation, computing, multiagent systems, negotiation
Tuesday, 17 July 2007
27, On the Benefits of Exploiting Hierarchical Goals in Bilateral Automated Negotiation
Notes from 'On the Benefits of Exploiting Hierarchical Goals in Bilateral Automated Negotiation' (2007), by Iyad Rahwan et al.
1, Introduction
2, Preliminaries
(Allocation, Utility functions, Payment, Deal, Utility of a Deal for an Agent, Rational Deals for an Agent, Individual Rational Deals)
3, Bargaining Protocol
(Dialogue History, Protocol-Reachable Deal)
4, Underlying Interests
(Partial Plan, Complete Plan, Individual Capability, Individually Achievable Plans, Utility of a Plan, Utility)
5, Mutual Interests
(Committed Goals, Achievable Plans)
6, Case Study: An IBN Protocol
7, Conclusion
Tuesday, 26 June 2007
26.4-6, Argument-based Negotiation among BDI Agents
Notes taken from 'Argument-based Negotiation among BDI Agents' (2002), by Sonia V. Rueda, Alejandro J. Garcia, Guillermo R. Simari
4, Collaborative Agents
Collaborative MAS: A collaborative Multi-Agent System will be a pair of a set of argumentative BDI agents and a set of shared beliefs.
(Negotiating Beliefs; Proposals and Counterproposals; Side-effects; Failure in the Negotiation)
5, Communication Languages
(Interaction Protocol; Interaction Language; Negotiation Primitives)
6, Conclusions and Future Work...
Labels: argumentation, computing, dialogues, logic, multiagent systems, negotiation
26.3, Argument-based Negotiation among BDI Agents
Notes taken from 'Argument-based Negotiation among BDI Agents' (2002), by Sonia V. Rueda, Alejandro J. Garcia, Guillermo R. Simari
3, Planning and Argumentation
Argumentative BDI Agent: The agent's desires D will be represented by a set of literals that will also be called goals. A subset of D will represent a set of committed goals and will be referred to as the agent's intentions... The agent's beliefs will be represented by a restricted Defeasible Logic Program... Besides its beliefs, desires and intentions, an agent will have a set of actions that it may use to change its world.
Action: An action A is an ordered triple (P, X, C), where P is a set of literals representing preconditions for A, X is a consistent set of literals representing consequences of executing A, and C is a set of constraints of the form not L, where L is a literal.
Applicable Action...
Action Effect...
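As a minimal sketch, the Action triple could be captured by a small data type like the one below; the names, and modelling literals as plain strings, are assumptions made for illustration.

import java.util.Set;

// An action A = (P, X, C): preconditions P, a consistent set of consequences X,
// and constraints C of the form "not L", with literals modelled here as strings.
record Action(Set<String> preconditions,
              Set<String> consequences,
              Set<String> constraints) {}

// e.g. an action with precondition clear(b), consequence on(a,b), constraint not heavy(a):
// Action stack = new Action(Set.of("clear(b)"), Set.of("on(a,b)"), Set.of("heavy(a)"));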
Labels: argumentation, computing, dialogues, logic, multiagent systems, negotiation
26.1-2, Argument-based Negotiation among BDI Agents
Notes taken from 'Argument-based Negotiation among BDI Agents' (2002), by Sonia V. Rueda, Alejandro J. Garcia, Guillermo R. Simari
"... Here we propose a deliberative mechanism for negotiation among BDI agents based in Argumentation."
1, Introduction
In a BDI agent, mental attitudes are used to model its cognitive capabilities. These mental attitudes include Beliefs, Desires and Intentions among others such as preferences, obligations, commitments, etc. These attitudes represent motivations of the agent and its informational and deliberative states which are used to determine its behaviour.
Agents will use a formalism based on argumentation in order to obtain plans for their goals, represented by literals. They will begin by trying to construct a warrant for the goal. That might not be possible because some needed literals are not available. The agent will try to obtain those missing literals, regarded as subgoals, by executing the actions it has available. When no action can achieve the subgoals the agent will request collaboration...
2, The Construction of a BDI Agent's Plan
Practical reasoning involves two fundamental processes: decide what goals are going to be pursued, and choose a plan on how to achieve them... The selected options will make up the agent's intentions; they will also have an influence on its actions, restrict future practical reasoning, and persist (in some way) in time...
... Abilities are associated with actions that have preconditions and consequences...
"... Here we propose a deliberative mechanism for negotiation among BDI agents based in Argumentation."
1, Introduction
In a BDI agent, mental attitudes are used to model its cognitive capabilities. These mental attitudes include Beliefs, Desires and Intentions among others such as preferences, obligations, commitments, etc. These attitudes represent motivations of the agent and its informational and deliberative states which are used to determine its behaviour.
Agents will use a formalism based in argumentation in order to obtain plans for their goals represented by literals. They will begin by trying to construct a warrant for the goal. That might not be possible because some need literals are not available. The agent will try to obtain those missing literals, regarded as subgoals, by executing the actions it has available. When no action can achieve the subgoals the agent will request collaboration...
2, The Construction of a BDI Agent's Plan
Practical reasoning involves two fundamental processes: decide what goals are going to be pursued, and choose a plan on how to achieve them... The selected options will make up the agent's intentions; they will also have an influence on its actions, restrict future practical reasoning, and persist (in some way) in time...
... Abilities are associated with actions that have preconditions and consequences...
Labels: argumentation, computing, dialogues, logic, multiagent systems, negotiation
Friday, 8 June 2007
23, Conflict-free normative agents using assumption-based argumentation
Notes taken from 'Conflict-free normative agents using assumption-based argumentation' (2007), by Dorian Gaertner and Francesca Toni
"... We (map) a form of normative BDI agents onto assumption-based argumentation. By way of this mapping we equip our agents with the capability of resolving conflicts amongst norms, belifs, desires and intentions. This conflict resolution is achieved by using the agent's preferences, represented in a variety of formats..."
1, Introduction
Normative agents that are governed by social norms may see conflicts arise amongst their individual desires, or beliefs, or intentions. These conflicts may be resolved by rendering information (such as norms, beliefs, desires and intentions) defeasible and by enforcing preferences. In turn, argumentation has proved to be a useful technique for reasoning with defeasible information and preferences when conflicts may arise.
In this paper we adopt a model for normative agents, whereby agents hold beliefs, desires and intentions, as in a conventional BDI model, but these mental attitudes are seen as contexts and the relationships amongst them are given by means of bridge rules...
2, BDI+N Agents: Preliminaries
(Background (BDI+N agents), Norm Representation in BDI+N Agents, Example)
3, Conflict Avoidance
(Background (Assumption-based argumentation framework), Naive Translation into Assumption-Based Argumentation, Avoiding Conflicts using Assumption-Based Argumentation)
4, Conflict Resolution using Preferences
(Preferences as a Total Ordering, Preferences as a Partial Ordering, Defining Dynamic Preferences via Meta-rules)
5, Conclusions
In this paper we have proposed to use assumption-based argumentation to solve conflicts that a normative agent can encounter, arising from applying conflicting norms but also due to conflicting beliefs, desires and intentions. We have employed qualitative preferences over an agent's beliefs, desires and intentions and over the norms it is subjected to in order to resolve conflicts...
"... We (map) a form of normative BDI agents onto assumption-based argumentation. By way of this mapping we equip our agents with the capability of resolving conflicts amongst norms, belifs, desires and intentions. This conflict resolution is achieved by using the agent's preferences, represented in a variety of formats..."
1, Introduction
Normative agents that are governed by social norms may see conflicts arise amongst their individual desires, or beliefs, or intentions. These conflicts may be resolved by rendering information (such as norms, beliefs, desires and intentions) defeasible and by enforcing preferences. In turn, argumentation has proved to be a useful technique for reasoning with defeasible information and preferences when conflicts may arise.
In this paper we adopt a model for normative agents, whereby agents hold beliefs, desires and intentions, as in a conventional BDI model, but these mental attitudes are seen as contexts and the relationship amongst them are given by means of bridge rules...
2, BDI+N Agents: Preliminaries
(Background (BDI+N agents), Norm Representation in BDI+N Agents, Example)
3, Conflict Avoidance
(Background (Assumption-based argumentation framework), Naive Translation into Assumption-Based Argumentation, Avoiding Conflicts using Assumption-Based Argumentation)
4, Conflict Resolution using Preferences
(Preferences as a Total Ordering, Preferences as a Partial Ordering, Defining Dynamic Preferences via Meta-rules)
5, Conclusions
In this paper we have proposed to use assumption-based argumentation to solve conflicts that a normative agent can encounter, arising from applying conflicting norms but also due to conflicting beliefs, desires and intentions. We have employed qualitative preferences over an agent's beliefs, desires and intentions and over the norms it is subjected to in order to resolve conflicts...
Tuesday, 24 April 2007
19, Assumption-based argumentation for epistemic and practical reasoning
Notes taken from 'Assumption-based argumentation for epistemic and practical reasoning' (2007), by Francesca Toni
"Assumption-based argumentation can serve as an effective computational tool for argumentation-based epistemic and practical reasoning, as required in a number of applications. In this paper we substantiate this claim by presenting formal mappings from frameworks for epistemic and practical reasoning onto assumption-based argumentation frameworks..."
1, Introduction
... In this paper, we consider two forms of reasoning that rational agents may need to perform, namely reasoning as to which beliefs they should hold (epistemic) and reasoning as to which course of action/decision they should choose (practical)...
2, Abstract and assumption-based argumentation...
3, Epistemic Reasoning...
3.1, Epistemic frameworks without preference rules...
3.2, Epistemic frameworks with preference rules...
4, Practical reasoning...
5, Example...
6, Conclusions
We have proposed concrete instances of assumption-based argumentation for epistemic reasoning... and practical reasoning...
... Within the ARGUGRID project, our approach to (epistemic and) practical reasoning can be used to model decisions concerning the orchestration of services available over the grid, taking into account preferences by the users and/or the service providers...
"Assumption-based argumentation can serve as an effective computational tool for argumentation-based epistemic and practical reasoning, as required in a number of applications. In this paper we substantiate this claim by presenting formal mappings from frameworks for epistemic and practical reasoning onto assumption-based argumentation frameworks..."
1, Introduction
... In this paper, we consider two forms of reasoning that rational agents may need to perform, namely reasoning as to which beliefs they should hold (epistemic) and reasoning as to which course of action/decision they should choose (practical)...
2, Abstract and assumption-based argumentation...
3, Epistemic Reasoning...
3.1, Epistemic frameworks without preference rules...
3.2, Epistemic frameworks with preference rules...
4, Practical reasoning...
5, Example...
6, Conclusions
We have proposed concrete instances of assumption-based argumentation for epistemic reasoning... and practical reasoning...
... Within the ARGUGRID project, our approach to (epistemic and) practical reasoning can be used to model decisions concerning the orchestration of services available over the grid, taking into account preferences by the users and/or the service providers...
Friday, 13 April 2007
Agents, AI and the Semantic Web
Quotes taken from 'A Semantic Web Primer' (2004), by Grigoris Antoniou and Frank van Harmelen
(page 199, AI and Web Services)
Web services are an application area where Artificial Intelligence techniques can be used effectively, for instance, for matching between service offers and service requests, and for composing complex services from simpler services, where automated planning can be utilized.
(page 223, How it all fits together)
... we consider an automated bargaining scenario to see how all technologies discussed fit together.
- Each bargaining party is represented by a software agent...
- The agents need to agree on the meaning of certain terms by committing to a shared ontology, e.g., written in OWL.
- Case facts, offers, and decisions can be represented using RDF statements. These statements become really useful when linked to an ontology.
- Information is exchanged between the agents in some XML-based (or RDF-based) language.
- The agent negotiation strategies are described in a logical language.
- An agent decides about the next course of action through inferring conclusions from the negotiation strategy, case facts, and previous offers and counteroffers.
Predicate Logic, Nonmonotonic Rules and Priorities
Quotes taken from 'A Semantic Web Primer' (2004), by Grigoris Antoniou and Frank van Harmelen
(page 94, An axiomatic semantics for RDF and RDF Schema)
... we formalize the meaning of the modeling primitives of RDF and RDF Schema. Thus we capture the semantics of RDF and RDFS.
The formal language we use is predicate logic, universally accepted as the foundation of all (symbolic) knowledge representation. Formulas used in the formalization are referred to as axioms.
By describing the semantics of RDF and RDFS in a formal language like logic we make the semantics unambiguous and machine accessible. Also, we provide a basis for reasoning support by automated reasoners manipulating logical formulas.
(page 161, Nonmonotonic rules: Motivation and syntax)
... we turn our attention to nonmonotonic rule systems. So far (i.e. with monotonic rules), once the premises of a rule were proved, the rule could be applied and its head could be derived as a conclusion. In nonmonotonic rule systems, a rule may not be applied even if all premises are known because we have to consider contrary reasoning chains. In general, the rules we consider from now are called defeasible, because they can be defeated by other rules. To allow conflicts between rules, negated atomic formulas may occur in the head and the body of rules...
... To distinguish between defeasible rules and standard, monotonic rules, we use a different arrow:
p(X) => q(X)
r(X) => ¬q(X)
In this example, given also the facts
p(a)
r(a)
we conclude neither q(a) nor ¬q(a). It is a typical example of two rules blocking each other. This conflict may be resolved using priorities among rules. Suppose we knew somehow that the first rule is stronger than the second; then we could indeed derive q(a).
Priorities arise naturally in practice, and may be based on various principles:
- The source of one rule may be more reliable than the source of the second, or may even have higher priority. For example, in law, federal law preempts state law...
- One rule may be preferred over another because it is more recent.
- One rule may be preferred over another because it is more specific. A typical example is a general rule with some exceptions; in such cases, the exceptions are stronger than the general rule.
Specificity may often be computed based on the given rules, but the other two principles cannot be determined from the logical formalization. Therefore, we abstract from the specific prioritization principle used, and assume the existence of an external priority relation on the set of rules. To express the relation syntactically, we extend the rule syntax to include a unique label, for example,
r1: p(X) => q(X)
r2: r(X) => ¬q(X)
Then we can write
r1 > r2
to specify that r1 is stronger than r2.
We do not impose many conditions on >. It is not even required that the rules form a complete ordering. We only require the priority relation to be acyclic. That is, it is impossible to have cycles of the form
r1 > r2 > ... rn > r1
Note that priorities are meant to resolve conflicts among competing rules. In simple cases two rules are competing only if the head of one rule is the negation of the head of the other. But in applications it is often the case that once a predicate p is derived, some other predicates are excluded from holding...
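A minimal sketch of the two-rule example above, under the assumption of a toy representation (labelled defeasible rules, ground facts as strings, and an explicit priority flag); it only illustrates the idea and is not the book's notation engine or any specific rule engine's API.

import java.util.List;

public final class DefeasibleToy {
    // r1: p(X) => q(X)   and   r2: r(X) => not q(X), grounded here at X = a.
    record Rule(String label, String premise, String conclusion) {}

    public static void main(String[] args) {
        List<String> facts = List.of("p(a)", "r(a)");
        Rule r1 = new Rule("r1", "p(a)", "q(a)");
        Rule r2 = new Rule("r2", "r(a)", "not q(a)");

        boolean r1Fires = facts.contains(r1.premise());
        boolean r2Fires = facts.contains(r2.premise());
        boolean r1StrongerThanR2 = true; // the external priority relation r1 > r2

        if (r1Fires && r2Fires) {
            // Both rules fire and their conclusions conflict: with no priority we
            // derive neither q(a) nor not q(a); with r1 > r2 we derive q(a).
            System.out.println(r1StrongerThanR2 ? "derive " + r1.conclusion() : "derive nothing");
        }
    }
}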
Wednesday, 4 April 2007
17, Information-seeking agent dialogs with permissions and arguments
Notes taken from ‘Information-seeking agent dialogs with permissions and arguments’ (2006), by Sylvie Doutre et al.
“… Many distributed information systems require agents to have appropriate authorisation to obtain access to information… We present a denotational semantics for such dialogs, drawing on Tuple Centres (programmable Tuple Spaces)…”
1, Introduction
… we present a formal syntax and semantics for such information-seeking dialogs involving permissions and arguments…
2.1, Dialog systems
The common elements of dialog systems are…
A typology of human dialogs was articulated by Walton and Krabbe, based upon the overall goal of the dialogue, the participants’ individual dialog goals, and the information they have at the commencement of the dialog (the topic language and the context)…
2.2, Tuple spaces
… a model of communication between distributed computational entities… The essential idea is that computational agents connected together may create named object stores, called tuples, which persist, even beyond the lifetimes of their creators, until explicitly deleted… They are stored in tuple spaces, which are black-board-like shared data stores, and are normally accessed by other agents by associative pattern matching… There are three basic operators on tuple spaces: out, rd, in…
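A minimal sketch of those three operators over an in-memory store, assuming tuples simplified to string arrays and with the blocking behaviour of real Linda omitted (a toy illustration, not the Tuple Centre / LGL implementation).

import java.util.ArrayList;
import java.util.List;

public final class ToyTupleSpace {
    private final List<String[]> tuples = new ArrayList<>();

    // out: add a tuple to the space.
    public synchronized void out(String... tuple) { tuples.add(tuple); }

    // rd: return a tuple matching the template (null fields act as wildcards) without removing it.
    public synchronized String[] rd(String... template) {
        return tuples.stream().filter(t -> matches(t, template)).findFirst().orElse(null);
    }

    // in: like rd, but the matching tuple is removed from the space.
    public synchronized String[] in(String... template) {
        String[] found = rd(template);
        if (found != null) tuples.remove(found);
        return found;
    }

    private static boolean matches(String[] tuple, String[] template) {
        if (tuple.length != template.length) return false;
        for (int i = 0; i < tuple.length; i++) {
            if (template[i] != null && !template[i].equals(tuple[i])) return false;
        }
        return true;
    }
}

// Usage: space.out("permission", "agentX", "file42"); then space.rd("permission", "agentX", null);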
2.3, LGL as a semantics for dialog systems
… (We) show how Law-Governed Linda (LGL) can be used as a denotational semantics for these systems, by associating elements of an LGL 5-tuple to the elements of the dialog system. Note that the dialog goal and the outcome rules have no associated elements in LGL…
3, Secure info-seek dialogue
3.1, Motivating example…
3.2, Protocol syntax
… In this system, an argument must be provided by an agent to justify it having permission to access some information. If access to information for agent x is refused by agent y, then agent x must try to persuade agent y that it should be allowed permission. This persuasion is made using arguments. If agent y yields to agent x’s arguments, then y provides x the information requested.
(Definitions given for Participants, Dialog goal, Context, Topic language, Communication language, Protocol, Effect rules, Outcome rules)
3.3, LGL semantics
(Associations to elements of an LGL 5-tuple given for elements of the dialog system: Participants, Context, Communication language, Protocol, Effect rules)
3.4, Illustration…
4, Implementation
In Section 1, we stated that our primary objective was the development of a semantics for these Information-seeking dialogs which facilitated implementation of the protocol. In order to assess whether the protocol and semantics of Section 3 met this objective, we undertook an implementation…
5, Related work and conclusions
… Our contribution in this paper is a novel semantics for information-seeking agent communications protocols involving permissions and arguments, in which utterances under the protocol are translated into commands in Law-Governed Linda and, through them, into actions on certain associated tuple spaces…
Labels:
argumentation,
computing,
dialogues,
information-seeking,
persuasion
Tuesday, 3 April 2007
16, Dialogues for Negotiation
Notes taken from ‘Dialogues for Negotiation: Agent Varieties and Dialogue Sequences’ (2001), by Fariba Sadri, Francesca Toni and Paolo Torroni
“… (The proposed solution) relies upon agents agreeing solely upon a language of negotiation, while possibly adopting different negotiation policies, each corresponding to an agent variety. Agent dialogues can be connected within sequences, all aimed at achieving an individual agent’s goal. Sets of sequences aim at allowing all agents in the system to achieve their goals…”
1, Introduction
… Many approaches in the area of one-to-one negotiation are heuristic-based and, in spite of their experimentally proven effectiveness, they do not easily lend themselves to expressing theoretically provable properties. Other approaches present a good descriptive model, but fail to provide an execution model that can help to forecast the behaviour of any corresponding implemented system…
… Note that we do not make any concrete assumption on the internal structure of agents, except for requiring that they hold beliefs, goals, intentions and, possibly, resources.
2, Preliminaries
2.1, A performative or dialogue move is an instance of a schema tell(X, Y, Subject, T)… e.g. tell(a, b, request(give(nail)), 1)…
2.2, A language for negotiation L is a (possibly infinite) set of (possibly non-ground) performatives. For a given L, we define two (possibly infinite) subsets of performatives, I(L) and F(L)…, called respectively initial moves and final moves. Each final move is either successful or unsuccessful.
2.3, An agent system is a finite set A, where each x in A is a ground term, representing the name of an agent, equipped with a knowledge base K(x).
3, Dialogues
3.4, Given an agent system A, equipped with a language for negotiation L, and an agent x in A, a dialogue constraint for x is a (possibly non-ground) if-then rule of the form: p(T) & C => p’(T + 1), where… The performative p(T) is referred to as the trigger, p’(T + 1) as the next move and C as the condition of the dialogue constraint.
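A minimal sketch of a dialogue constraint of this p(T) & C => p'(T+1) form; the representation, and the response performative used in the example, are assumptions made for illustration (the paper's actual negotiation language is only partially reproduced in these notes).

import java.util.Set;
import java.util.function.Predicate;

public final class DialogueConstraintToy {
    // p(T) & C => p'(T+1): a trigger performative, a condition C checked against
    // the agent's knowledge base K, and the next move to utter one step later.
    record Constraint(String trigger, Predicate<Set<String>> condition, String nextMove) {}

    public static void main(String[] args) {
        Set<String> knowledgeBase = Set.of("have(nail)", "have(hammer)");

        // Illustrative constraint: if asked for a nail and the agent has one,
        // accept the request at the next time step (the reply performative is invented).
        Constraint c = new Constraint(
                "tell(a, b, request(give(nail)), 1)",
                kb -> kb.contains("have(nail)"),
                "tell(b, a, accept(request(give(nail))), 2)");

        String incoming = "tell(a, b, request(give(nail)), 1)";
        if (incoming.equals(c.trigger()) && c.condition().test(knowledgeBase)) {
            System.out.println(c.nextMove()); // the move selected as the reply
        }
    }
}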
3.5, A dialogue between two agents x and y is a set of ground performatives, {p0, p1, p2, …}, such that… A dialogue {p0, p1, …, pM}… is terminated if pM is a ground final move…
3.6, A request dialogue wrt a resource R and an intention I of agent x is a dialogue… such that…
3.7, (Types of terminated request dialogues) Let I be the intention of some agent a, and R be a missing resource in I. Let d be a terminated request dialogue wrt R and I, and I’ be the intention resulting from d. Then, if missing(Rs), plan(P) are in I and missing(Rs’), plan(P’) are in I’:
i) d is successful if P’ = P, Rs’ = Rs \ {R};
ii) d is conditionally or c-successful if Rs’ /= Rs and Rs’ /= Rs \ {R};
iii) d is unsuccessful if I’ = I.
Note that, in the case of c-successful dialogues, typically, but not always, the agent’s plan will change (P’ /= P).
3.8, An agent x in A is convergent iff, for every terminated request dialogue of x, wrt some resource R and some intention I, the cost of the returned intention I’ is not higher than the cost of I. The cost of an intention can be defined as the number of missing resources in the intention.
4, Properties of Agent Programs
4.9, An agent x in A is deterministic iff, for each performative p(t) which is a ground instance of a schema in L(in), there exists at most one performative p’(t+1) which is a ground instance of a schema in L(out) such that ‘p(t) & C => p’(t+1)’ is in the agent program S and ‘K & p(t)’ entails C.
4.10, An agent program S is non-overlapping iff for each performative p which is a ground instance of a schema in L(in), for each C, C’ in S(p) such that C /= C’, then C ^ C’ = false.
(Theorem 1) If the (grounded) agent program of x is non-overlapping, then x is deterministic.
4.11, An agent x in A is exhaustive iff, for each performative p(t) which is a ground instance of a schema in L(in) \ F(L), there exists at least one performative p’(t+1) which is a ground instance of a schema in L(out) such that ‘p(t) & C => p’(t+1)’ is in S and ‘K & p(t)’ entails C.
4.12, Let L(S) be the set of all (not necessarily ground) performatives p(T) that are triggers in dialogue constraints:
L(S) = {p(T) | there exists ‘p(t) & C => p’(t+1)’ in S}. (Obviously, L(S) is a subset of L(in)). Then, S /= {} is covering iff for every performative p which is a ground instance of a schema in L(in), the disjunction of C’s in S(p) is ‘true’ and L(S) = L(in) \ F(L).
(Theorem 2) If the (grounded) agent program of x is covering, then x is exhaustive.
5, Agent Varieties: Concrete Examples of Agent Programs…
6, Dialogue Sequences
6.13, A sequence of dialogues s(I) wrt an intention I of an agent x with goal(G) in I is an ordered set {d1, d2, …, dn, …}, associated with a sequence of intentions I1, I2, …, In+1, … such that…
6.14, A sequence of dialogues {d1, d2, …, dn} wrt an initial intention I of an agent x and associated with the sequence of intentions I1, I2, …, In+1 is terminated iff there exists no possible request dialogue wrt In+1 that x can start.
6.15, (Success of a dialogue sequence) A terminated sequence of dialogues {d1, d2, …, dn} wrt an initial intention I of an agent x and associated with the sequence of intentions I1, I2, …, In+1 is successful if In+1 has an empty set of missing resources; it is unsuccessful otherwise.
6.16, Given an initial intention I of agent x, containing a set of missing resources Rs, the agent dialogue cycle is the following…
(Theorem 3) Given an agent x in A, if x’s agent dialogue cycle returns ‘success’ then there exists a successful dialogue sequence wrt the initial intention I of x.
(Theorem 4) Given an agent x with intention I, and a successful dialogue sequence s(I) generated by x’s dialogue cycle, if x is convergent, then the number of dialogues in s(I) is bounded by m.|Rs|, where missing(Rs) is in I and |A \ {x}| = m, A being the set of agents in the system.
7, Using Dialogue Sequences for Resource Reallocation
7.17, (Resource reallocation problem – rrp)
Given an agent system A, with each agent x in A equipped with a knowledge base K(x) and an intention I(x),
- the rrp for an agent x in A is the problem of finding a knowledge base K’(x), and an intention I’(x) (for the same goal as I(x)) such that missing({}) is in I’(x).
- the rrp for the agent system A is the problem of solving the rrp for every agent in A.
A rrp is solved if the required (sets of) knowledge base(s) and intention(s) is (are) found.
(Theorem 5) (Correctness of the agent dialogue cycle wrt the rrp) Let A be the agent system, with the agent programs of all agents in A being convergent. If all agent dialogue cycles of all agents in A return ‘success’ then the rrp for the agent system is solved.
7.18, Let A be an agent system consisting of n agents. Let R(A) be the union of all resources held by all agents in A, and R(I(A)) be the union of all resources needed to make all agents’ initial intentions I(A) executable. A is weakly complete if, given that R(I(A)) is a subset of R(A), then there exist n successful dialogue sequences, one for each agent in A, such that the intentions I’(A) returned by the sequences have the same plans as I(A) and all have an empty set of missing resources.
8, Conclusions…
“… (The proposed solution) relies upon agents agreeing solely upon a language of negotiation, while possibly adopting different negotiation policies, each corresponding to an agent variety. Agent dialogues can be connected within sequences, all aimed at achieving an individual agent’s goal. Sets of sequences aim at allowing all agents in the system to achieve their goals…”
1, Introduction
… Many approaches in the area of one-to-one negotiation are heuristic-based and, in spite of their experimentally proven effectiveness, they do not easily lend themselves to expressing theoretically provable properties. Other approaches present a good descriptive model, but fail to provide an execution model that can help to forecast the behaviour of any corresponding implemented system…
… Note that we do not make any concrete assumption on the internal structure of agents, except for requiring that they hold beliefs, goals, intentions and, possibly, resources.
2, Preliminaries
2.1, A performative or dialogue move is an instance of a schema tell(X, Y, Subject, T)… e.g. tell(a, b, request(give(nail)), 1)…
2.2, A language for negotiation L is a (possibly infinte) set of (possibly non ground) performatives. For a given L, we define two (possibly infinte) subsets of performatives, I(L) and F(L)…, called respectively initial moves and final moves. Each final move is either successful or unsuccessful.
2.3, An agent system is a finite set A, where each x in A is a ground term, representing the name of an agent, equipped with a knowledge base K(x).
3, Dialogues
3.4, Given an agent system A, equipped with a language for negotiation L, and an agent x in A, a dialogue constraint for x is a (possibly non-ground) if-then rule of the form: p(T) & C => p’(T + 1), where… The performative p(T) is referred to as the trigger, p’(T + 1) as the next move and C as the condition of the dialogue constraint.
3.5, A dialogue between two agents x and y is a set of ground performatives, {p0, p1, p2, …}, such that… A dialogue {p0, p1, … pM)… is terminated if pM is a ground final move…
3.6, A request dialogue wrt a resource R and an intention I of agent x is a dialogue… such that…
3.7, (Types of terminated request dialogues) Let I be the intention of some agent a, and R be a missing resource in I. Let d be a terminated resource dialogue wrt R and and I, and I’ be the intention resulting from d. Then, if missing(Rs), plan(P) are in I and missing(Rs’), plan(P’) are in I’:
i) d is successful if P’ = P, Rs’ = Rs \ {R};
ii) d is conditionally or c-successful if Rs’ /= Rs and Rs’ /= Rs \ {R};
iii) d is unsuccessful if I’ = I.
Note that, in the case of c-successful dialogues, typically, but not always, the agent’s plan will change (P’ /= P).
3.8, An agent x in A is convergent iff, for every terminated request dialogue of x, wrt some resource R and some intention I, the cost of the returned intention I’ is not higher than the cost of I. The cost of an intention can be defined as the number of missing resources in the intention.
4, Properties of Agent Programs
4.9, An agent x in A is deterministic iff, for each performative p(t) which is a ground instance of a schema in L(in), there exists at most one performative p’(t+1) which is a ground instance of a schema in L(out) such that ‘p(t) & C => p’(t+1)’ is in the agent program S and ‘K & p(t)’ entails C.
4.10, An agent program S is non-overlapping iff for each performative p which is a ground instance of a schema in L(in), for each C, C’in S(p) such that C /= C’, then C ^ C’ = false.
(Theorem 1) If the (grounded) agent program of x is non-overlapping, then x is deterministic.
4.11, An agent x in A is exhaustive iff, for each performative p(t) which is a ground instance of a schema in L(in) \ F(L), there exists at least one performative p’(t+1) which is a ground instance of a schema in L(out) such that ‘p(t) & C => p’(t+1)’ is in S and ‘K & p(t)’ entails C.
4.12, Let L(S) be the set of all (not necessarily ground) performatives p(T) that are triggers in dialogue constraints:
L(S) = {p(T) | there exists ‘p(t) & C => p’(t+1)’ in S}. (Obviously, L(S) is a subset of L(in)). Then, S /= {} is covering iff for every performative p which is a ground instance of a schema in L(in), the disjunction of C’s in S(p) is ‘true’ and L(S) = L(in) \ F(L).
(Theorem 2) If the (grounded) agent program of x is covering, then x is exhaustive.
5, Agent Varieties: Concrete Examples of Agent Programs…
6, Dialogue Sequences
6.13, A sequence of dialogues s(I) wrt an intention I of an agent x with goal(G) in I is an ordered set {d1, d2, …, dn, …}, associated with a sequence of intentions I1, I2, …, In+1, … such that…
6.14, A sequence of dialogues {d1, d2, …, dn} wrt an initial intention I of an agent x and associated with the sequence of intentions I1, I2, …, In+1 is terminated iff there exists no possible request dialogue wrt In+1 that x can start.
6.15, (Success of a dialogue sequence) A terminated sequence of dialogues {d1, d2, …, dn} wrt an initial intention I of an agent x and associated with the sequence of intentions I1, I2, …, In+1 is successful if In+1 has an empty set of missing resources; it is unsuccessful otherwise.
6.16, Given an initial intention I of agent x, containing a set of missing resources Rs, the agent dialogue cycle is the following…
(Theorem 3) Given an agent x in A, if x’s agent dialogue cycle returns ‘success’ then there exists a successful dialogue sequence wrt the initial intention I of x.
(Theorem 4) Given an agent x with intention I, and a successful dialogue sequence s(I) generated by x’s dialogue cycle, if x is convergent, then the number of dialogues in s(I) is bounded by m.|Rs|, where missing(Rs) is in I and |A \ {x}| = m, A being the set of agents in the system.
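As a purely illustrative instance of the Theorem 4 bound (the numbers are invented):

```python
# With 3 other agents and 2 missing resources, a convergent agent's
# successful dialogue sequence contains at most 3 * 2 = 6 dialogues.
m, Rs = 3, {"hammer", "nail"}
print(m * len(Rs))  # 6
```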
7, Using Dialogue Sequences for Resource Reallocation
7.17, (Resource reallocation problem – rrp)
Given an agent system A, with each agent x in A equipped with a knowledge base K(x) and an intention I(x),
- the rrp for an agent x in A is the problem of finding a knowledge base K’(x), and an intention I’(x) (for the same goal as I(x)) such that missing({}) is in I’(x).
- the rrp for the agent system A is the problem of solving the rrp for every agent in A.
A rrp is solved if the required (sets of) knowledge base(s) and intention(s) is (are) found.
(Theorem 5) (Correctness of the agent dialogue cycle wrt the rrp) Let A be the agent system, with the agent programs of all agents in A being convergent. If all agent dialogue cycles of all agents in A return ‘success’ then the rrp for the agent system is solved.
7.18, Let A be an agent system consisting of n agents. Let R(A) be the union of all resources held by all agents in A, and R(I(A)) be the union of all resources needed to make all agents’ initial intentions I(A) executable. A is weakly complete if, given that R(I(A)) is a subset of R(A), then there exist n successful dialogue sequences, one for each agent in A, such that the intentions I’(A) returned by the sequences have the same plans as I(A) and all have an empty set of missing resources.
8, Conclusions…
Saturday, 31 March 2007
15.5-15.7, A Generative Inquiry Dialogue System
Notes taken from ‘A Generative Inquiry Dialogue System’ (2007), by Elizabeth Black and Anthony Hunter
5, Soundness and Completeness
5.1, The argument inquiry outcome of a dialogue is a function… if D is a well-formed argument inquiry dialogue with participants x1 and x2, then… (given D the outcome is the set of all arguments that can be constructed from the union of the commitment stores and whose claims are in the question store).
… The benchmark that we compare the outcome of the dialogue with is the set of arguments that can be constructed from the union of the two agents’ beliefs. So this benchmark is, in a sense, the ‘ideal situation’ where there are clearly no constraints on the sharing of beliefs.
5.2, Let D be a well-formed argument inquiry dialogue with participants x1 and x2. We say that D is sound if and only if… (when the outcome of the dialogue includes an argument, then that same argument can be constructed from the union of the two participating agents’ beliefs).
(Theorem 5.1) If D is a well-formed argument inquiry dialogue with participants x1 and x2, then D is sound. Proof: …
5.3, Let D be a well-formed argument inquiry dialogue with participants x1 and x2. We say that D is complete iff… (if the dialogue terminates at t and it is possible to construct an argument for a literal in the question store from the union of the two participating agents’ beliefs, then that argument will be in the outcome of the dialogue at t.)
(Theorem 5.2) If D is a well-formed argument inquiry dialogue with participants x1 and x2, then D is complete. Proof: …
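Taken together, Definitions 5.1-5.3 reduce to set comparisons between the dialogue outcome and the benchmark. A hypothetical sketch, with arguments treated abstractly as (support, claim) pairs and the argument-construction function left as a parameter:

```python
# Arguments are represented abstractly as (support, claim) pairs; the function
# that constructs all arguments from a set of beliefs is passed in.

def dialogue_outcome(arguments_from, cs1, cs2, question_store):
    """Def. 5.1: arguments constructible from the union of the two
    commitment stores whose claims lie in the question store."""
    return {a for a in arguments_from(cs1 | cs2) if a[1] in question_store}

def is_sound(outcome, benchmark):
    """Def. 5.2: every outcome argument is also a benchmark argument."""
    return outcome <= benchmark

def is_complete(outcome, benchmark, question_store):
    """Def. 5.3: every benchmark argument for a question-store literal
    appears in the outcome of the terminated dialogue."""
    return {a for a in benchmark if a[1] in question_store} <= outcome

def toy_arguments(beliefs):
    """Toy argument constructor: one argument for 'flu' if 'fever' is believed."""
    return {(frozenset(beliefs), "flu")} if "fever" in beliefs else set()

cs1, cs2, qs = {"fever"}, set(), {"flu"}
out = dialogue_outcome(toy_arguments, cs1, cs2, qs)
benchmark = toy_arguments({"fever"})
print(is_sound(out, benchmark), is_complete(out, benchmark, qs))  # True True
```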
6, Future work…
… We would like to allow more than two agents to take part in an argument inquiry dialogue…
We currently assume that an agent’s belief base does not change during a dialogue, and would like to consider the implication of dropping this assumption…
We would also like to further explore the benchmark which we compare our dialogue outcomes to…
… We would like to further investigate the utility of argument inquiry dialogues when embedded in dialogues of different types.
7, Conclusions…
15.4, A Generative Inquiry Dialogue System
Notes taken from ‘A Generative Inquiry Dialogue System’ (2007), by Elizabeth Black and Anthony Hunter
4, Dialogue System
The communicative acts in a dialogue are called moves. We assume that there are always exactly two agents (participants) taking part in a dialogue… Each participant takes its turn to make a move to the other participant…
A move in our system is of the form (Agent, Act, Content)… e.g. (x, ‘open’, belief), (x, ‘assert’, (support, conclusion)), (x, ‘close’, belief)…
4.1, A dialogue… is a sequence of moves of the form… (The first move of a dialogue… must always be an open move…, every move of the dialogue must be made to a participant of the dialogue…, and the agents take it in turns to receive moves…)
4.2, (Terminology that allows us to talk about the relationship between two dialogues: ‘Sub-dialogue of’…, ‘top-level dialogue’…, ‘top-dialogue of’…, and ‘extends’…)
4.3, Let D be a dialogue with participants x1 and x2 such that topic(D) = gamma. We say that m(s)… is a matched-close for D iff m(s-1) = (P, close, gamma) and m(s) = (~P, close, gamma).
So a matched-close will terminate a dialogue D but only if D has not already terminated and any sub-dialogues that are embedded within D have already terminated.
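A matched-close is easy to detect mechanically; a small sketch assuming moves are (agent, act, content) tuples (an encoding of my own, not the paper's):

```python
def is_matched_close(moves, s, topic):
    """Moves m[s-1] and m[s] form a matched-close for a dialogue with the
    given topic iff both are close moves on that topic and they were made
    by the two different participants (Def. 4.3)."""
    prev, curr = moves[s - 1], moves[s]
    return (prev[1] == "close" and curr[1] == "close"
            and prev[2] == topic and curr[2] == topic
            and prev[0] != curr[0])

moves = [("x1", "open", "flu"), ("x2", "close", "flu"), ("x1", "close", "flu")]
print(is_matched_close(moves, 2, "flu"))  # True
```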
4.4, Let D be a dialogue. D terminates at t iff the following conditions hold…
4.5, The current dialogue is… (the innermost dialogue that has not yet terminated).
We adopt the standard approach of associating a commitment store with each agent participating in a dialogue. A commitment store is a set of beliefs that the agent has asserted so far in the course of the dialogue. As commitment stores consist of things that the agent has already publicly declared, its contents are visible to the other agent participating in the dialogue. For this reason, when constructing an argument, an agent may make use of not only its own beliefs but also those from the other agent’s commitment store.
4.6, A commitment store associated with an agent… is a set of beliefs.
An agent’s commitment store grows monotonically over time. If an agent makes a move asserting an argument, every element of the support is added to the agent’s commitment store. This is the only time the commitment store is updated.
4.7, Commitment store update…
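The update definition is elided in these notes, but the rule it captures is small enough to state as code; a sketch under the assumption that an assert move carries its argument as a (support, claim) pair:

```python
def update_commitment_store(commitment_store, move):
    """Only assert moves change a commitment store: every element of the
    asserted argument's support is added (monotonically) to the store of
    the agent that made the move."""
    agent, act, content = move
    if act == "assert":
        support, _claim = content
        return commitment_store | set(support)
    return commitment_store

cs_x1 = set()
cs_x1 = update_commitment_store(cs_x1, ("x1", "assert", ({"fever", "fever -> flu"}, "flu")))
print(cs_x1)  # e.g. {'fever', 'fever -> flu'}
```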
… when an argument inquiry dialogue with topic ‘alpha1 ^ … ^ alphaN -> phi’ is opened, a question store associated with that dialogue is created whose contents are {alpha1, …, alphaN}. Throughout the dialogue the participant agents will both try to provide arguments for the literals in the question store. This may lead them to open further nested argument inquiry dialogues whose topic is a rule whose consequent is a literal in the question store.
4.8… a question store… is a finite set of literals such that…
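A sketch of how a question store might be seeded when an open move with a rule topic is made (the (antecedent, consequent) rule encoding is my own assumption):

```python
def new_question_store(rule_topic):
    """When a nested argument inquiry dialogue is opened on a rule
    'alpha1 ^ ... ^ alphaN -> phi', its question store starts out as the
    set of antecedent literals {alpha1, ..., alphaN}."""
    antecedent, _consequent = rule_topic  # rule encoded as (list of literals, literal)
    return set(antecedent)

print(new_question_store((["fever", "aches"], "flu")))  # {'fever', 'aches'}
```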
A protocol is a function that returns the set of moves that are legal for an agent to make at a particular point in a particular type of dialogue…
4.9, The argument inquiry protocol is a function (that takes the top-level dialogue that the agents are participating in and the identifier of the agent whose turn it is to move, and returns the set of legal moves that the agent can make)…
(An agent can only assert an argument or open a rule for a conclusion that is in the current Question Store, and such that the move has not been made by either agent at an earlier timepoint in the dialogue.)
Note that it is straightforward to check conformance with the protocol as the protocol only refers to public elements of the dialogue.
… a specific strategy function… allows an agent to select exactly one legal move to make at each timepoint in an argument inquiry dialogue. A strategy is personal to an agent and the move that it returns depends on the agent’s private beliefs. The argument inquiry strategy states that if there are any legal moves that assert an argument that can be constructed by the agent (by means of its belief base and the other agent’s commitment store) then a single one of those moves is selected…, else if there are any legal open moves with a defeasible rule as their content that is in the agent’s beliefs then a single one of these moves is selected… If there are no such moves then a close move is made.
4.10… The function pickO returns the chosen open move…
4.11… The function pickA returns the chosen assert move…
4.12, The argument inquiry strategy is a function (that takes the top-level dialogue that the agents are participating in and the identifier of the agent whose turn it is to move, and returns exactly one of the legal moves)…
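A compressed sketch of the strategy's preference order (assert if possible, else open, else close). The move encoding is assumed as above, and the list-head picks merely stand in for the deterministic pickA/pickO functions:

```python
def argument_inquiry_strategy(legal_moves, own_beliefs, other_commitment_store):
    """Pick exactly one legal move: prefer an assert whose argument the agent
    can construct from its own beliefs plus the other agent's commitment
    store, else an open move whose rule is among the agent's own beliefs,
    else a close move (assumed always to be among the legal moves)."""
    usable = own_beliefs | other_commitment_store

    asserts = [m for m in legal_moves
               if m[1] == "assert" and set(m[2][0]) <= usable]
    if asserts:
        return asserts[0]   # stand-in for the deterministic pickA

    opens = [m for m in legal_moves
             if m[1] == "open" and m[2] in own_beliefs]
    if opens:
        return opens[0]     # stand-in for the deterministic pickO

    return next(m for m in legal_moves if m[1] == "close")

legal = [("x1", "close", "flu"),
         ("x1", "assert", (("fever", "fever -> flu"), "flu"))]
print(argument_inquiry_strategy(legal, {"fever -> flu"}, {"fever"}))
# -> the assert move, since its support is covered by beliefs + the other CS
```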
Note that a top-level argument inquiry dialogue will always have a defeasible fact as its topic, but each of its sub-dialogues will always have a defeasible rule as its topic.
4.13, D is a well-formed argument inquiry dialogue iff… (D does not continue after it has terminated and is generated by the argument inquiry strategy).
… Note that we assume some higher-level planning component that guides the agent when deciding whether to enter into a dialogue, who this dialogue should be with and on what topic, i.e. that makes the decision to make the move m1.
15.1-15.3, A Generative Inquiry Dialogue System
Notes taken from ‘A Generative Inquiry Dialogue System’ (2007), by Elizabeth Black and Anthony Hunter
“… We focus on inquiry dialogues that allow two agents to share knowledge in order to construct an argument for a specific claim…”
1, Introduction
Dialogue games are now a common approach to defining communicative agent behaviour, especially when this behaviour is argumentation-based… Dialogue games are normally made up of a set of communicative acts called moves, a set of rules that state which moves it is legal to make at any point in a dialogue (the protocol), a set of rules that define the effect of making a move, and a set of rules that determine when a dialogue terminates. Most of the work so far has looked at modelling different types of dialogue in the Walton and Krabbe typology… here we provide a generative system.
... A key contribution of this work is that we not only provide a protocol for modelling inquiry dialogues but we also provide a specific strategy to be followed, making this system sufficient to also generate inquiry dialogues… and this allows us to consider soundness and completeness properties of our system.
2, Motivation
Our work has been motivated by the medical domain. Argumentation allows us to deal with the incomplete, inconsistent and uncertain knowledge that is characteristic of medical knowledge. There are often many different healthcare professionals involved in the care of a patient, each of whom has a particular type of specialised knowledge and who must cooperate in order to provide the best possible care for the patient…
Inquiry dialogues are a type of dialogue that would be of particular use in the healthcare domain, where it is often the case that people have distinct types of knowledge and so need to interact with others in order to have all the information necessary to make a decision…
… We compare the outcome of our dialogues with the outcome that would be arrived at by a single agent whose beliefs are the union of the beliefs of both agents participating in the dialogue. This is, in some sense, the ideal situation, where there are no constraints on the sharing of beliefs.
3, Knowledge Representation and Arguments
We adapt Garcia and Simari’s Defeasible Logic Programming (DeLP)… for representing each agent’s beliefs…
The presentation in this section differs slightly from that in (Garcia and Simari’s DeLP)… as (they) assume a set of strict rules, which we assume to be empty, and they assume facts to be non-defeasible. We assume that all knowledge is defeasible due to the nature of medical knowledge, which is constantly expanding…
3.1, A defeasible rule is denoted 'alpha1 ^ … ^ alphaN -> alpha0' where alphai is a literal… A defeasible fact is denoted 'alpha' where alpha is a literal. A belief is either a defeasible rule or a defeasible fact.
3.2, A belief base associated with an agent x is a finite set…
3.3… A defeasible derivation of (a literal) ‘alpha’ from (a set of beliefs)… is a finite sequence alpha1, alpha2, …, alphaN of literals such that alphaN is alpha and each literal… is in the sequence because…
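A defeasible derivation can be sketched as plain forward chaining over defeasible facts and rules (the (body, head) rule encoding is an assumption, not DeLP's actual syntax):

```python
def defeasibly_derives(facts, rules, alpha):
    """Return True if the literal alpha has a defeasible derivation from the
    given defeasible facts and rules; rules are encoded as (body, head).
    Plain forward chaining to a fixed point."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for body, head in rules:
            if head not in derived and set(body) <= derived:
                derived.add(head)
                changed = True
    return alpha in derived

facts = {"fever", "aches"}
rules = [(("fever", "aches"), "flu"), (("flu",), "stay_home")]
print(defeasibly_derives(facts, rules, "stay_home"))  # True
```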
3.4, An argument constructed from a set of, possibly inconsistent, beliefs… is a minimally consistent set from which the claim can be defeasibly derived…
Thursday, 29 March 2007
What is an Inquiry Dialogue?
Paragraph taken from the introduction of 'A Generative Inquiry Dialogue System' by Elizabeth Black and Anthony Hunter (2007)
In this paper we focus on inquiry dialogues. Walton and Krabbe define an inquiry dialogue as arising from an initial situation of "general ignorance" and as having the main goal to achieve the "growth of knowledge and agreement". Each individual participating in an inquiry dialogue has the goal to "find a 'proof' or destroy one" ('Commitment in Dialogue: Basic Concepts of Interpersonal Reasoning', page 66)... we have defined two different types of inquiry dialogue, each of which we believe fits the general definition:
- warrant inquiry dialogue - the 'proof' takes the form of a dialectical tree (essentially a tree with an argument at each node, whose arcs represent the counter-argument relation and that has at its root an argument whose claim is the topic of the dialogue).
- argument inquiry dialogue - the 'proof' takes the form of an argument for the topic of the dialogue.
Argument inquiry dialogues are commonly embedded in warrant inquiry dialogues. In this paper, we will focus only on argument inquiry dialogues.