Wednesday 29 August 2007

Example Dialogue (Safe Car)

Taken from 'Dialogue Games for Ontological Commitment' (2003), Robbert-Jan Beun and M. van Eijk

We give an example of a dialogue (somewhat adorned in natural language) that is generated by the rules [presented in the paper]:

A1: Is this a safe car?

A2's ontology defines the non-basic concept 'safety' in terms of 'having airbags' and 'having a good crash test'. According to this interpretation, the car is indeed believed to be safe, but since A2 does not know whether the meaning is shared, it responds ... :

A2: Yes, it has air bags and a good crash test.

This response is pushed on the stack of A1. Agent A1, however, has a different view on 'safety of cars', and it manifests this discrepancy by responding ... :

A1: In my opinion, a safe car would also have traction control.

Agent A2 now knows A1's interpretation of 'safety' ... and since it believes that this particular car does not have traction control it gives the following answer to the initial question ... :

A2: Mhm, if safety also amounts to having traction control then this car is not safe.

This response is pushed on the stack of A1. Agent A1 has received an acceptable answer to its question and ends the dialogue ... :

A1: OK, thank you.

Note that if, in the second turn, A2 had not manifested its interpretation of 'safety', the ontological discrepancy would have remained unnoticed, possibly leading A1 to draw incorrect conclusions from the answer.
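A minimal sketch (in Python, my own illustration rather than the paper's formal dialogue rules) of why the two agents diverge: each agent defines the non-basic concept 'safe' as a set of required basic properties, and the answer to A1's question depends on whose definition is evaluated. The names CAR, A1_SAFE, A2_SAFE and is_safe are hypothetical.

# Basic facts both agents agree on about this particular car.
CAR = {"airbags", "good_crash_test"}

# Each agent's ontology: 'safe' is defined by a set of required properties.
A2_SAFE = {"airbags", "good_crash_test"}
A1_SAFE = {"airbags", "good_crash_test", "traction_control"}

def is_safe(car, definition):
    # A car counts as safe iff it has every property the definition demands.
    return definition <= car

print(is_safe(CAR, A2_SAFE))   # True  -> "Yes, it has air bags and a good crash test."
print(is_safe(CAR, A1_SAFE))   # False -> "... then this car is not safe."

Because A2 manifests its definition instead of answering a bare 'yes', A1 can compare A2_SAFE with its own A1_SAFE and detect the discrepancy; a bare 'yes' would have hidden it.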

Thursday 16 August 2007

Updates versus Revisions

Taken from 'Belief Revision' (1992) (page 183), Edited by Peter Gardenfors

... we make a fundamental distinction between two kinds of modifications to a knowledge base. The first one, update, consists of bringing the knowledge base up to date when the world described by it changes...

The second type of modification, revision, is used when we are obtaining new information about a static world...
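A small propositional illustration of the distinction (my own sketch; the quoted text does not commit to particular operators). Revision here keeps the models of the new sentence closest to the old knowledge base as a whole, while update adjusts each model of the knowledge base separately; the distance-based choices are roughly Dalal-style revision versus Forbus/Winslett-style update, picked only for illustration.

from itertools import product

VARS = ("book_on_table", "magazine_on_table")

def models(formula):
    # All truth assignments over VARS (as dicts) that satisfy `formula`.
    return [dict(zip(VARS, vals))
            for vals in product([True, False], repeat=len(VARS))
            if formula(dict(zip(VARS, vals)))]

def dist(m1, m2):
    # Hamming distance between two assignments.
    return sum(m1[v] != m2[v] for v in VARS)

def revise(kb_models, new_models):
    # Revision: new information about a static world; keep the models of the
    # new sentence that are globally closest to the old knowledge base.
    d = min(dist(m, k) for m in new_models for k in kb_models)
    return [m for m in new_models if any(dist(m, k) == d for k in kb_models)]

def update(kb_models, new_models):
    # Update: the world has changed; adjust each old model (possible world)
    # separately to the closest models of the new sentence.
    result = []
    for k in kb_models:
        d = min(dist(m, k) for m in new_models)
        result += [m for m in new_models if dist(m, k) == d and m not in result]
    return result

# KB: exactly one of the two objects is on the table.
kb = models(lambda m: m["book_on_table"] != m["magazine_on_table"])
new = models(lambda m: m["book_on_table"])   # new sentence: the book is on the table

print(revise(kb, new))   # one model: book on table, magazine not (we learned about a static world)
print(update(kb, new))   # two models: magazine's status stays open (the world may have changed)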

Justifications versus Coherence Models

Taken from 'Belief Revision' (1992) (page 8), Edited by Peter Gardenfors

A question that has to be answered when modelling a state of belief is whether the justifications for the beliefs should be part of the model or not. With respect to this question there are two main approaches. One is the foundations theory which holds that one should keep track of the justifications for one's beliefs: Propositions that have no justification should not be accepted as beliefs. The other is the coherence theory which holds that one need not consider the pedigree of one's beliefs. The focus is instead on the logical structure of the beliefs - what matters is how a belief coheres with the other beliefs that are accepted in the present state.

It should be obvious that the foundations and the coherence theories have very different implications for what should count as rational changes of belief systems. According to the foundations theory, belief revision should consist, first, in giving up all beliefs that no longer have a satisfactory justification and, second, in adding new beliefs that have become justified. On the other hand, according to the coherence theory, the objectives are, first, to maintain consistency in the revised epistemic state and, second, to make minimal changes of the old state that guarantee sufficient overall coherence. Thus, the two theories of belief revision are based on conflicting ideas of what constitutes rational changes of belief. The choice of underlying theory is, of course, also crucial for how a computer scientist will attack the problem of implementing a belief revision system.

-----

Taken from 'Automating Belief Revision for AgentSpeak' (2006), Natasha Alechina et al.

AGM style belief revision is sometimes referred to as the coherence approach to belief revision, because it is based on the ideas of coherence and information economy. It requires that the changes to the agent's belief state caused by a revision be as small as possible. In particular, if the agent has to give up a belief in A, it does not give up believing in things for which A was the sole justification, so long as they are consistent with the remaining beliefs.

Another strand of theoretical work in belief revision is the foundational, or reason-maintenance style approach to belief revision. Reason-maintenance style belief revision is concerned with tracking dependencies between beliefs. Each belief has a set of justifications, and the reasons for holding a belief can be traced back through these justifications to a set of foundational beliefs. When a belief must be given up, sufficient foundational beliefs have to be withdrawn to render the belief underivable. Moreover, if all the justifications for a belief are withdrawn, then that belief itself should no longer be held. Most implementations of reason-maintenance style belief revision are incomplete in the logical sense, but tractable.
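A toy justification-tracking sketch of the reason-maintenance idea (my own illustration, assuming a far simpler structure than the AgentSpeak machinery in the paper): each derived belief records the sets of beliefs that justify it, and a belief survives only while at least one of its justification sets is still fully believed. BeliefBase, add and withdraw are hypothetical names.

class BeliefBase:
    def __init__(self):
        self.justifications = {}   # belief -> list of justification sets

    def add(self, belief, justified_by=None):
        # `justified_by` is a set of supporting beliefs; empty for a
        # foundational belief, which then needs no further support.
        self.justifications.setdefault(belief, []).append(frozenset(justified_by or []))

    def withdraw(self, belief):
        # Remove the belief, then repeatedly drop justifications that mention
        # withdrawn beliefs and beliefs left with no surviving justification.
        self.justifications.pop(belief, None)
        changed = True
        while changed:
            changed = False
            for b, justs in list(self.justifications.items()):
                kept = [j for j in justs if j.issubset(self.justifications)]
                if not kept:
                    del self.justifications[b]
                    changed = True
                elif len(kept) != len(justs):
                    self.justifications[b] = kept

    def beliefs(self):
        return set(self.justifications)

kb = BeliefBase()
kb.add("airbags")                                            # foundational
kb.add("good_crash_test")                                    # foundational
kb.add("safe", justified_by={"airbags", "good_crash_test"})  # derived

kb.withdraw("airbags")
print(kb.beliefs())   # {'good_crash_test'}: 'safe' lost its only justification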

Three Kinds of Belief Changes

Taken from 'Belief Revision' (1992) (page 3), Edited by Peter Gardenfors

A belief revision occurs when a new piece of information that is inconsistent with the present belief system (or database) is added to that system in such a way that the result is a new consistent belief system. But this is not the only kind of change that can occur in a belief system. Depending on how beliefs are represented and what kinds of inputs are accepted, different typologies of belief changes are possible.

In the most common case, when beliefs are represented by sentences in some code, and when a belief is either accepted or rejected in a belief system (so that no degrees of belief are considered), one can distinguish three main kinds of belief changes:

(i) Expansion: A new sentence is added to a belief system together with the logical consequences of the addition (regardless of whether the larger set so formed is consistent).

(ii) Revision: A new sentence that is inconsistent with a belief system is added, but, in order to maintain consistency in the resulting belief system, some of the old sentences are deleted.

(iii) Contraction: Some sentence in the belief system is retracted without adding any new facts. In order for the resulting system to be closed under logical consequences, some other sentences must be given up.
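A deliberately tiny sketch of the three operations (my own illustration) on a belief base of propositional literals such as 'p' and '~p'. Logical closure is ignored: the base is just the literals explicitly listed, so the only inconsistency possible is holding a literal together with its negation, and the choice of which old beliefs a revision discards is illustrative rather than derived from the AGM postulates.

def neg(lit):
    # Negation of a literal: 'p' <-> '~p'.
    return lit[1:] if lit.startswith("~") else "~" + lit

def expand(base, lit):
    # Expansion: add the new sentence, even if the result is inconsistent.
    return base | {lit}

def revise(base, lit):
    # Revision: add the new sentence, dropping whatever contradicts it.
    return (base - {neg(lit)}) | {lit}

def contract(base, lit):
    # Contraction: retract a sentence without adding anything new.
    return base - {lit}

base = {"has_airbags", "safe"}
print(expand(base, "~safe"))    # {'has_airbags', 'safe', '~safe'}  (inconsistent)
print(revise(base, "~safe"))    # {'has_airbags', '~safe'}
print(contract(base, "safe"))   # {'has_airbags'}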