The structure and behavior of a software system can be described by means of software models, using notations such as Alloy, graph-based formalisms or UML/OCL. These notations describe software systems at a high level of abstraction, hiding implementation details while preserving their salient features. Analyzing these models can reveal complex faults in the underlying systems.

In this analysis, the key assets for checking the correctness of software models are model finders: tools capable of computing instances of a model that satisfy a set of constraints and properties of interest. Each model finder targets a particular modeling notation and relies on a specific reasoning engine, such as search-based methods, SAT, SMT or constraint programming.

For verification purposes, it is usually enough to search for one instance, which either proves or disproves the property of interest. However, for testing and validation purposes, several instances are typically required to increase our confidence in the correctness of the model. It is highly desirable that those instances exhibit diversity, i.e., distinct configurations of the system and interesting corner cases. Lack of diversity may make validation and testing more time-consuming, as the analysis includes almost-duplicate instances that provide no added value, and less effective, as the sample of instances may fail to include relevant scenarios.

Nevertheless, most model finders focus on efficiency and on the expressiveness of the input modeling notation, so few of them ensure diversity of the generated instances. In these few, diversity assurance is integrated into the solver: it guides the search process to look for diverse instances. However, this integration makes it harder to transfer the proposed methods to other solvers and notations. Thus, designers are limited in terms of expressiveness (e.g., no support for integer or string attributes or dynamic properties) and cannot benefit from additional features provided by other model finders (e.g., computation of minimal instances or support for max-satisfiability).

We propose a method for distilling diverse instances from the model finder output based on clustering. Instances are classified into categories according to their similarity, which is calculated using information about their structure (the existing objects and the links between them), typing (the specific type of each object) and attribute values. This calculation relies on graph kernels, a family of methods for computing distances among graphs (graph kernels also have potential benefits in a variety of other model-driven engineering problems). Selecting a representative instance from each category ensures diversity while reducing testing and validation time, as redundant instances can be safely discarded. As a drawback, this method does not force the model finder to look for diverse instances; it only distills the most diverse ones from those generated.
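As a concrete (if toy) illustration of what a graph kernel computes, the sketch below compares two small labeled graphs using a Weisfeiler-Lehman kernel from the GraKeL Python library. The library and the specific kernel are our illustrative choices, not necessarily the exact setup used in the paper.

    # Minimal sketch: similarity between two labeled graphs via a
    # Weisfeiler-Lehman graph kernel (GraKeL; an illustrative choice).
    from grakel import Graph
    from grakel.kernels import WeisfeilerLehman, VertexHistogram

    # Two tiny instance graphs: node labels encode object types, edges
    # encode links (given in both directions, i.e., undirected).
    g1 = Graph([(0, 1), (1, 0), (1, 2), (2, 1)],
               node_labels={0: "Department", 1: "Employee", 2: "Employee"})
    g2 = Graph([(0, 1), (1, 0)],
               node_labels={0: "Department", 1: "Employee"})

    kernel = WeisfeilerLehman(n_iter=2, base_graph_kernel=VertexHistogram,
                              normalize=True)
    S = kernel.fit_transform([g1, g2])
    print(S[0][1])  # similarity between g1 and g2, in [0, 1]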

Compared with related works, our approach offers the following advantages:

  1. It is independent of the solver used by the model finder (SAT, SMT, …) and the modeling notation being analyzed (Alloy, UML/OCL, …).
  2. It does not require manual intervention from the designer to define what kind of instances are “relevant” or when two instances are “similar”.
  3. The similarity computation can be customized, e.g., by selecting a trade-off between precision and accuracy.

This work has been accepted at ABZ 2020 (7th International Conference on Rigorous State-Based Methods). You can read the full paper (ABZ_2020_Model_diversity_in_model_finders) or continue reading for a brief introduction to the method.

Overview of our diversity method

The overview of our approach for identifying diverse instances in model finder output is depicted in the featured image above. Our input is a set of instances computed by a model finder, and our output is a set of clusters grouping those instances according to their similarity. From this output, it is possible to select a representative instance for each cluster, e.g., choosing the smallest instance.

The method can be divided into three steps (a code sketch follows the list):

  1. Graph abstraction: First, each instance is abstracted as a labeled graph, where labels store type and attribute value information and the underlying graph captures the objects and the links among them.
  2. Graph kernel: Then, the pairwise similarity among the n graphs is computed using a state-of-the-art labeled graph comparison technique. The result of this computation is an n×n matrix S where each cell S_ij provides information about the similarity between graphs i and j.
  3. Clustering: Finally, the similarity data is used by a clustering procedure to classify instances into groups of similar instances. The most suitable number of groups is determined by using clustering validity indices, which measure whether elements in the cluster are similar to each other and different from elements in other clusters.
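To make the pipeline concrete, the following sketch strings the three steps together, assuming GraKeL for the kernel and scikit-learn (version 1.2 or later) plus NumPy for the clustering and the validity index. The kernel, linkage and index are illustrative choices, and the instance accessors (objects, links) are hypothetical placeholders for whatever structure the parsed model finder output exposes.

    # Sketch of the three-step pipeline (illustrative choices throughout).
    import numpy as np
    from grakel import Graph
    from grakel.kernels import WeisfeilerLehman, VertexHistogram
    from sklearn.cluster import AgglomerativeClustering
    from sklearn.metrics import silhouette_score

    def to_labeled_graph(instance):
        # Step 1: abstract an instance as a labeled graph. The accessors
        # `instance.links` and `instance.objects` are hypothetical.
        edges = [(l.source, l.target) for l in instance.links]
        edges += [(t, s) for (s, t) in edges]   # encode links as undirected
        labels = {o.id: o.type for o in instance.objects}
        return Graph(edges, node_labels=labels)

    def cluster_instances(instances, max_k=8):
        graphs = [to_labeled_graph(i) for i in instances]

        # Step 2: n x n similarity matrix S via a graph kernel, turned
        # into a distance matrix D for the clustering step.
        kernel = WeisfeilerLehman(n_iter=4, base_graph_kernel=VertexHistogram,
                                  normalize=True)
        S = kernel.fit_transform(graphs)
        D = 1.0 - S
        np.fill_diagonal(D, 0.0)

        # Step 3: hierarchical clustering; the cluster count k is chosen
        # by a validity index (here, the silhouette score).
        best_score, best_k, best_labels = -1.0, None, None
        for k in range(2, min(max_k, len(instances) - 1) + 1):
            labels = AgglomerativeClustering(
                n_clusters=k, metric="precomputed", linkage="average"
            ).fit_predict(D)
            score = silhouette_score(D, labels, metric="precomputed")
            if score > best_score:
                best_score, best_k, best_labels = score, k, labels
        return best_k, best_labels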

Clustering Example

To illustrate how the method works and the type of results it can achieve, we will use the following UML class diagram (Figure 1(a)). This model describes the relationships between employees who work in or lead a department. There are two constraints regarding the salary, defined as OCL invariants: all salaries must be below a salary threshold and also below the salary of the department’s director.

To be usable in practice, this model should be strongly satisfiable: it should have some instance where all integrity constraints are satisfied and each class has a non-empty population. In our example, the class diagram is satisfiable and a potential solution is the instance shown in Figure 1(b). Instances like this can then be used for validating and testing the UML/OCL model.


Fig. 1. Motivating example: (a) UML/OCL class diagram; (b) sample instance; (c) encoding of the instance as a labeled graph; (d) graph shapes of the three clusters.

We have used the USE Model Validator to generate 25 valid instances for this model. By manually inspecting these instances, we can easily see that most of them are very similar. A designer would be interested in a smaller and more diverse set of instances that conveys the same or even more information than the 25 original ones. We explain next how this can be achieved with our method.

Applying our method, each object diagram is abstracted as a labeled graph. As an example, Figure 1(c) shows the abstraction for the object diagram in Figure 1(b). We then apply hierarchical clustering to our 25 graphs using the similarity information provided by a graph kernel algorithm. From the results, validity indices recommend choosing 3 clusters. Thus, we have discovered that out of the 25 instances, there are only 3 types of solutions worth considering. The common pattern in each cluster is depicted in Figure 1(d).
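As a rough illustration of how this looks in code, running the cluster_instances sketch from the previous section over the 25 instances could go as follows (parsing the USE Model Validator output into instance objects is out of scope here, and the accessors remain hypothetical):

    # Hypothetical driver: `instances` holds the 25 parsed object diagrams.
    k, labels = cluster_instances(instances)
    print(k)  # -> 3, the number of clusters picked by the validity index

    # A simple representative per cluster: its smallest member instance.
    for c in range(k):
        members = [i for i, lab in enumerate(labels) if lab == c]
        rep = min(members, key=lambda i: len(instances[i].objects))
        print(f"cluster {c}: representative instance #{rep}")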

Notice that one cluster identified by our method (the middle one) highlights a potential problem in the model: a department whose director works in another department. This is a corner case worth studying, to decide whether it should actually be allowed or whether it is a mistake in the model that needs to be fixed.

Please refer to the full paper for the details of each phase.
