During the second half of 2011 and the first half of 2012, Jordi and I (Carlos González) conducted an analysis of the field of static model verification, with the intent of shedding some light on the current landscape of the area and, if possible, discovering new challenges and/or weaknesses that could help steer future research efforts.

This analysis, which took the form of a systematic literature review (for more on this, read [1] or [2]), has recently been published in the Elsevier Information and Software Technology journal under the title “Formal Verification of Static Software Models in MDE: A Systematic Review” (free preprint version). This blog post just summarizes what is written there, focusing on what we think is the most interesting part: the findings. In any case, feel free to leave us a comment asking for more details if necessary. Let’s get started.

As is typical of systematic reviews, our study started by posing a series of research questions. This is paramount, since answering these questions is the driving force behind this type of analysis and has a significant influence on how the different stages are addressed. Our research questions were:

1.- What are the typical phases in model verification approaches?

2.- What are the formal methods and techniques employed in the verification approaches?

3.- What are the types of static models verified in each case?

4.- To what extent is OCL supported?

5.- Which correctness properties are verified in each case?

6.- Are the existing approaches automatic and complete?

7.- Are the existing approaches supported by a verification tool?

8.- What type of feedback is obtained from the verification process in each case?

 

Identification and Retrieval of Relevant Works

With the research questions in place, the next step was to design a systematic procedure to identify as many relevant works in the area of interest as possible. Normally, this implies doing a number of things, such as: defining criteria to determine what counts as a relevant work (papers devoted to static model verification published since 2002); deciding where to look for these works (popular repositories like DBLP, IEEE Xplore, Web of Knowledge and some others); and implementing a mechanism to retrieve them (querying the repositories and systematically pruning the records found to discard non-relevant works).

When we conducted the retrieval process, we obtained a set of 48 relevant works, but, given the clear relationships among some of them, we decided to group them into 18 coarse-grained studies. These 18 studies were the focus of the rest of our work.

Answering the Research Questions

After carefully reading the relevant works found and running the available verification tools, we found ourselves ready to answer the research questions. Here is what we found:

1.- What are the typical phases in model verification approaches?

Verification approaches go through a series of stages. Normally, it all begins with a formalization stage, where the verification problem is translated into some kind of formalism. This formalism is then exploited during a subsequent reasoning stage to determine whether the model under analysis is correct. Not all the studies include a reasoning stage: only 11 do, 9 of which are also supported by a verification tool.

 

2.- What are the formal methods and techniques employed in the verification approaches?

The most typical formalism employed is some sort of logic: up to 11 studies use First Order Logic, Description Logics, or some other type of logic representation. 2 studies encode the verification problem as a Constraint Satisfaction Problem (CSP), and 2 studies use specification languages (B and Object-Z). In the remaining cases, some other kind of mathematical notation is employed.
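To give a flavor of what a CSP-style encoding can look like, here is a deliberately simplified sketch of our own (it is not the encoding used by any of the surveyed approaches): population sizes and link counts become integer variables, multiplicities become arithmetic constraints, and a solver searches for a consistent assignment. The Paper/Reviewer model, the multiplicities and the bound are all made up for the example, and a plain brute-force loop stands in for a real CSP solver.

```python
from itertools import product

# Hypothetical model: each Paper is reviewed by exactly 3 Reviewers,
# and each Reviewer reviews at most 2 Papers.
# CSP variables: number of Paper objects, Reviewer objects, and review links.
BOUND = 15  # search only within this (arbitrary) bound for each variable

def find_instance_sizes():
    for n_paper, n_reviewer, n_links in product(range(BOUND + 1), repeat=3):
        if n_paper < 1 or n_reviewer < 1:    # ask for at least one object per class
            continue
        if n_links != 3 * n_paper:           # "exactly 3 reviewers per paper"
            continue
        if n_links > 2 * n_reviewer:         # "at most 2 papers per reviewer"
            continue
        return {"papers": n_paper, "reviewers": n_reviewer, "links": n_links}
    return None  # no assignment found within the bound

print(find_instance_sizes())  # {'papers': 1, 'reviewers': 2, 'links': 3}
```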

 

3.- What are the types of static models verified in each case?

All the studies cover the verification of UML class diagrams. One of the studies also covers the verification of domain-specific modeling languages.

 

4.- To what extent is OCL supported?

Up to 8 of the studies accompanied by a verification tool support OCL in its full generality. On the other hand, some studies do not support OCL at all, as they focus only on the formalization stage. Several other studies support only a subset of OCL.

 

5.- Which correctness properties are verified in each case?

The correctness property most commonly checked is satisfiability: a model is satisfiable if it is possible to create at least one valid non-empty instance of it. The analysis of this property is covered in 11 studies. Other popular correctness properties have to do with the relationships among the constraints in the model, such as checking for the presence of redundant constraints.
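As a concrete (made-up) example of an unsatisfiable model, consider a class Employee with an age attribute and two contradictory OCL-like invariants. The tiny Python check below simply enumerates candidate attribute values and never finds an object satisfying both invariants, so no valid non-empty instance of the model exists.

```python
# Hypothetical model: class Employee with attribute age and two invariants:
#   inv adult:     self.age >= 18
#   inv too_young: self.age <= 16   (contradicts the first invariant)

invariants = [
    lambda emp: emp["age"] >= 18,
    lambda emp: emp["age"] <= 16,
]

def find_valid_employee(max_age=150):
    """Return an Employee satisfying every invariant, or None if none is found."""
    for age in range(max_age + 1):
        candidate = {"age": age}
        if all(inv(candidate) for inv in invariants):
            return candidate
    return None

print(find_valid_employee())  # None: no object can satisfy both invariants
```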

 

6.- Are the existing approaches automatic and complete?

Automation and completeness of the reasoning stage are closely related to the degree of support of OCL. Among the studies covering the reasoning stage, those supporting only a subset of OCL are automatic and complete. That’s not the case for studies supporting OCL in its full generality, though. Among them, 2 studies propose approaches that are complete, but not automatic (i.e. they need help from the user to steer the verification process). However, the strategy most commonly followed (5 studies) consists of applying a bounded verification approach; that is, limiting the search space in which to look for a solution to the problem. In these cases, the reasoning process is automatic, but not complete (results are only conclusive if the model is found to be correct).
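The sketch below illustrates the bounded-verification idea on a made-up example of our own (a Person class where every person must have a strictly older mentor), not the algorithm of any surveyed tool: it enumerates candidate instances up to a small, user-given bound. Finding an instance would prove the model satisfiable; exhausting the bound without finding one, as happens here, only tells us that no instance exists within the bound, which is exactly why the approach is automatic but not complete.

```python
from itertools import product

# Hypothetical model: class Person with an age attribute and a "mentor" link;
# an invariant requires every person's mentor to be strictly older.
def verify_bounded(max_objects, max_age):
    """Search for a valid instance with at most max_objects persons."""
    for n in range(1, max_objects + 1):                     # non-empty instances only
        for ages in product(range(max_age + 1), repeat=n):  # candidate age values
            for mentors in product(range(n), repeat=n):     # mentors[i] = index of i's mentor
                if all(ages[mentors[i]] > ages[i] for i in range(n)):
                    return {"ages": ages, "mentors": mentors}  # conclusive: satisfiable
    return None  # inconclusive: nothing within the bound, larger instances not explored

print(verify_bounded(max_objects=3, max_age=5))  # None: no instance found within the bound
```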

 

7.- Are the existing approaches supported by a verification tool?

The studies covering the reasoning stage are usually accompanied by a tool, although not all of these tools were available for download. Examples are HOL-OCL, UMLtoCSP, EMFtoCSP, UML2Alloy and USE.

 

8.- What type of feedback is obtained from the verification process in each case?

Tools normally provide feedback of the “Yes/No” type, complemented with a valid instance when the model is found to be correct (the “Yes” case).

 

Areas of Improvement

Reading the relevant works not only gave us enough insight to answer the research questions, but also allowed us to discover some aspects that, in our opinion, need improvement.

 

  • Terminology problems

There is no precise and rigorous terminology shared among all the studies analyzed. Obviously, this makes it difficult to contextualize works in this field. In particular, there is a grey zone when it comes to establishing the boundary between model verification approaches and those devoted to checking model consistency.

Unambiguous definitions for the terminology around the concept of static model verification (including terms like well-formedness, verification, validation, consistency, etc.) are clearly needed.

 

  • Lack of consistency when naming correctness properties

Terminology issues also affect the way correctness properties are named and defined. One example of this is “satisfiability”, the most popular correctness property. In the works analyzed, there are 6 different ways to refer to this property, each with its own meaning. In some cases, it is even possible to find more than one “flavor” of satisfiability coexisting in the same study. To make matters worse, some correctness properties can be expressed in terms of others that are, let’s say, more “core” ones.

A catalog of correctness properties with precise and unambiguous names and meanings, as well as a clear description of how they are related to each other, could be of great help to alleviate this problem.

 

  • Lack of Adequate Tools

The existing tools in the field, apart from being very limited in number, lack certain characteristics, and this prevents wider adoption. To begin with, these tools have been designed to conduct verification as a separate process, and therefore do not integrate well with the other tools typically used by model designers. The feedback when a model is found to be correct is acceptable, but clearly insufficient in the opposite case: users may need help to know where to apply fixes, especially when dealing with large models, and just saying that the model is not correct does not seem to be enough. Finally, performance is a major issue. These tools tend to behave well with small models or toy examples, but performance drops sharply when dealing with large or real-life models.

 

  • Difficulties in comparing existing tools

Another issue that has to do with the existing tools, but that also has its roots in the terminology issues mentioned above, is the difficulty of comparing them. There are no benchmarks to evaluate their performance and, in many cases, it is difficult to find out exactly which modeling constructs are supported. The examples accompanying these tools are clearly insufficient for this, and it is not always easy to adapt them for use with a different tool. In some cases, it is not even clear how to verify the same correctness property with two different tools.

A standard set of benchmarks, based on the catalog of correctness properties proposed above, could facilitate the analysis and comparison of these tools and even enhance their development, since the benchmarks could also be used for testing purposes.

 

To conclude, we hope that this analysis helps the community better understand the current situation regarding the formal verification of static software models, and what is needed to move forward.
