In her keynote talk at ECMFA’14, Marsha Chechik talked about explicating and reasoning with model uncertainty, a problem every modeler faces when trying to model a domain.
As she says, “Model uncertainty can be introduced into the modeling process in many ways: alternative ways to model inconsistencies, different design alternatives, modeler’s knowledge about the problem domain, multiple stakeholder opinions, etc. Instead of waiting until uncertainty is resolved or forcing premature design decisions, we propose to defer the resolution of uncertainty for as long as necessary, while supporting a variety of transformation and reasoning operations that allow modelers to live with this uncertainty … Our specification of models with uncertainty implicitly encodes a set of alternative possible models, where we are not sure which is the correct one”.
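To make the idea of “implicitly encoding a set of alternative possible models” concrete, here is a minimal sketch in Python. It is not Chechik’s actual formalism or toolkit; the names and the set-based representation are my own illustrative assumptions. A partial model keeps some elements definite and marks others as undecided, and each undecided element doubles the number of concrete models the partial model stands for:

```python
from itertools import combinations

def concretizations(definite, maybe):
    """Enumerate all concrete models encoded by one partial model.

    `definite` elements appear in every alternative; each `maybe`
    element can be kept or dropped, so a partial model with n
    undecided elements stands for up to 2**n concrete models.
    """
    for r in range(len(maybe) + 1):
        for kept in combinations(sorted(maybe), r):
            yield set(definite) | set(kept)

# Hypothetical class-diagram fragment with two undecided associations:
definite = {"Order", "Customer", "Order->Customer"}
maybe = {"Order->Invoice", "Customer->Account"}

alternatives = list(concretizations(definite, maybe))
print(len(alternatives))  # 4 concrete models for 2 undecided elements
```

The point of deferring resolution is that reasoning operations can then be run over all alternatives at once (e.g. a property holds “for sure” if it holds in every concretization), instead of forcing a premature choice.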
To learn more about her ideas, you can read her extended abstract or, better, sit down and enjoy the slides she used in her talk:
I am always glad to find works on the “soft issues” of modelling. In particular, vagueness in models is of great interest to us. I have browsed through Chechik’s slides, and also checked out Aughenbaugh’s classification (see slide 11 here), and I am not convinced that this is a good conceptual framework for vagueness in models. I will explain briefly.
I fully agree that, in some cases, vagueness resides in the knowledge, and we call this epistemic vagueness, uncertainty or inaccuracy. This is the case, for example, when I estimate someone’s age as “roughly 55”; she may actually be 53, but I make a small mistake and my mental model is inaccurate.
According to our view, there is a second kind of vagueness, which is ontological, i.e. inherent to the world rather than dependent on our knowledge about it. We call it ontological vagueness or imprecision. This is the case, for example, of the surface area of a city. Since a city has no clearly and precisely defined boundaries, there is no way to obtain a single exact figure for its area. It’s not about our knowledge; it’s about the nature of the world. I disagree with the slides and Aughenbaugh’s model in that this ontological vagueness is not necessarily related to randomness; the city example proves it.
Then there is the issue of reducibility. What do we mean by that? By definition, ontological vagueness is inherent in the world, and therefore there is no “true precise answer”, so it makes no sense to “reduce” vagueness in this case. Epistemic vagueness, on the other hand, can always be reduced in theory, since by definition a precise, true value always exists regardless of our knowledge about it, so the answer to the question of reducibility is always “yes”. From my point of view, then, reducibility is determined by the kind of vagueness we are dealing with, rather than being an orthogonal dimension.
We could think of “pragmatic reducibility” as an alternative, i.e. whether we can make up a precise value for ontologically vague properties when needed. This is practical and useful in many scenarios; for example, land management systems do enter a figure under “city area”, after all. But this does not mean that this figure is the true value for the city area, as any other figure that is close enough would be just as valid.
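The distinction between the two kinds of vagueness and their reducibility can be sketched in code. This is only an illustration of the argument above, with made-up class and attribute names, not an implementation of any existing framework: an epistemic value has a true value behind it and can be refined as evidence accumulates, while an ontological value has no true figure at all, only a pragmatic representative plus a band of equally valid alternatives:

```python
class EpistemicValue:
    """Uncertainty about a true value: reducible by gathering evidence."""
    def __init__(self, estimate, margin):
        self.estimate, self.margin = estimate, margin

    def refine(self, new_estimate, new_margin):
        # More evidence can only narrow the margin, never widen it.
        assert new_margin <= self.margin
        return EpistemicValue(new_estimate, new_margin)


class OntologicalValue:
    """Inherent vagueness: no true value exists, so there is nothing
    to reduce. We can still record a 'pragmatic' figure for bookkeeping."""
    def __init__(self, representative, tolerance):
        self.representative, self.tolerance = representative, tolerance

    def equally_valid(self, other_figure):
        # Any figure within the tolerance band is as valid as the stored one.
        return abs(other_figure - self.representative) <= self.tolerance


# The age example: "roughly 55" narrows to "53, give or take 1" with evidence.
age = EpistemicValue(55, margin=5).refine(53, new_margin=1)
# The city-area example (figures in km^2, invented for illustration):
# any value within the tolerance band is just as valid as the stored one.
area = OntologicalValue(101.9, tolerance=2.0)
print(age.margin, area.equally_valid(100.5))
```

Note how `refine` exists only on the epistemic class: that asymmetry is exactly the claim that reducibility follows from the kind of vagueness rather than being an independent dimension.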
Finally, there are additional issues that strongly affect uncertainty and that I also think should be considered when creating models, such as subjectivity (different opinions) and temporality (the passage of time changing things). Subjectivity is mentioned in the slides but, as far as I can see, not supported by the toolkit; neither is temporality.
Vagueness is a complex and difficult topic. Thanks for posting this, Jordi!