Taken from 'Dialogue Games for Ontological Commitment' (2003), Robbert-Jan Beun and M. van Eijk
We give an example of a dialogue (somewhat adorned in natural language) that is generated by the rules [presented in the paper]:
A1: Is this a safe car?
A2's ontology defines the non-basic concept 'safety' in terms of 'having airbags' and 'having a good crash test'. According to this interpretation the car is indeed believed to be safe, but since A2 does not know whether this meaning is shared, it responds ... :
A2: Yes, it has air bags and a good crash test.
This response is pushed on the stack of A1. Agent A1, however, has a different view of 'safety of cars', and it manifests this discrepancy by responding ... :
A1: In my opinion, a safe car would also have traction control.
Agent A2 now knows A1's interpretation of 'safety' ... and since it believes that this particular car does not have traction control it gives the following answer to the initial question ... :
A2: Mhm, if safety also amounts to having traction control, then this car is not safe.
This response is pushed on the stack of A1. Agent A1 has received an acceptable answer to its question and ends the dialogue ... :
A1: OK, thank you.
Note that if, in the second turn, A2 had not manifested its interpretation of 'safety', the ontological discrepancy would have remained unnoticed, possibly leading A1 to draw incorrect conclusions from the answer.
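
To make the discrepancy concrete, here is a minimal sketch in Python. It is not the paper's formalism: the dialogue rules and response stacks are left out, and the property names and the car's attribute set are hypothetical. It simply models each agent's non-basic concept 'safe' as a set of required basic properties and evaluates the same car under both ontologies.

# Illustrative sketch only: each agent's definition of 'safe' is a
# conjunction (set) of required basic properties; the same car is then
# judged under the two ontologies.

# The car from the dialogue: airbags and a good crash test, but no
# traction control (hypothetical encoding).
car_properties = {"airbags", "good_crash_test"}

# A2's ontology: 'safe' = having airbags AND a good crash test.
safety_a2 = {"airbags", "good_crash_test"}

# A1's ontology: 'safe' additionally requires traction control.
safety_a1 = safety_a2 | {"traction_control"}

def is_safe(required: set[str], observed: set[str]) -> bool:
    """The car counts as safe iff it has every property the ontology requires."""
    return required <= observed  # subset test: all required properties present

print("Safe under A2's definition:", is_safe(safety_a2, car_properties))  # True
print("Safe under A1's definition:", is_safe(safety_a1, car_properties))  # False

Under A2's definition the answer to the initial question is 'yes'; once A1's additional requirement is made explicit, the same facts yield 'no', which is exactly the revised answer A2 gives in its second turn.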