I see some confusion about the relationship between three concepts: executable models, code generation, and model interpretation. This was true when I first wrote this post (in 2010). It is still true today (in 2023!).

An executable model is a model complete enough to be executable. I’m fully aware that this definition is ambiguous (see the interesting discussion we had on LinkedIn about this issue), but the truth is that the executability of a model depends more on the execution tool we use than on the model itself (e.g. some tools may require a more complete and precise model specification, while others may be able to “fill the gaps” and execute more incomplete models).

The most well-known family of executable models is that of the Executable UML methods (see also the classic book on this topic). Executable UML models make extensive use of an action language (a kind of imperative pseudocode; see, for instance, the Alf action language) to precisely define the behaviour of all class methods. The OMG itself has standardized this notion of Executable UML models, more specifically as the Semantics of a Foundational Subset for Executable UML Models (FUML). Surprisingly enough, the current version of this standard does not even include a definition of what an executable model is.

Code generation and model interpretation are then two alternative strategies to “implement” such execution tools.

The code-generation strategy uses a model compiler (often defined as a model-to-text transformation) to generate a lower-level representation of the model, expressed in an existing programming language and platform (e.g. Java). The generated code can then be executed as any other program in that target language. In contrast, the model interpretation strategy relies on the existence of a virtual machine able to directly read and run the model (see, for instance, this proposal for a UML virtual machine).
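To make the contrast concrete, here is a minimal, self-contained Java sketch of both strategies applied to the same toy “model”: a single entity with typed attributes. Everything in it (the Entity record, the method names) is invented for illustration and does not correspond to any real MDD tool.

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class TwoStrategies {

    // A deliberately tiny "metamodel": an entity with typed attributes.
    record Entity(String name, Map<String, String> attributes) {}

    // Strategy 1: code generation. A model-to-text transformation that
    // emits a Java class for the entity; the output would then be
    // compiled and run like any hand-written program.
    static String generateJava(Entity e) {
        StringBuilder src = new StringBuilder("public class " + e.name() + " {\n");
        e.attributes().forEach((attr, type) ->
                src.append("    private ").append(type).append(" ").append(attr).append(";\n"));
        src.append("}\n");
        return src.toString();
    }

    // Strategy 2: model interpretation. A "virtual machine" that reads
    // the model directly and builds instances at run time, without ever
    // producing Java source.
    static Map<String, Object> interpretNewInstance(Entity e) {
        Map<String, Object> instance = new LinkedHashMap<>();
        e.attributes().keySet().forEach(attr -> instance.put(attr, null));
        return instance;
    }

    public static void main(String[] args) {
        Entity order = new Entity("Order", Map.of("total", "double"));
        System.out.println(generateJava(order));          // path A: emit code
        System.out.println(interpretNewInstance(order));  // path B: run the model directly
    }
}
```

Note the trade-off visible even at this toy scale: the generated source from the first path still needs to be compiled and deployed, while the second path executes immediately but requires the interpreter to be present at run time.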

Initially, most MDD approaches opted for the code-generation strategy, as it is the more straightforward one. But now most vendors are going for the model interpretation strategy, especially in the low-code domain, where offering this more “easy-to-use-and-deploy” strategy for your models is one of their selling points.

Taking a bird’s-eye view, the benefits and drawbacks of each approach are clear, and Johan den Haan has already described them for me.

However, there is often no clear-cut distinction between the two approaches. This is similar to what happens in the programming world with the discussion around compiled and interpreted languages: we increasingly see the same language offered with both options, including combinations of the two.

For instance, I could imagine a modeling virtual machine implemented using an internal, on-the-fly code-generation approach. Even if the user is never aware of this internal compilation, the virtual machine would be “cheating”: it would really be a disguised wrapper on top of a code-generation strategy.
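As a rough sketch of that idea (with all names invented for illustration), the class below looks like a pure interpreter from the outside, but internally it compiles each model element into an executable form on first use and caches it. Here the “compiled” form is just a Java lambda; a real tool might emit and compile actual source.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.DoubleUnaryOperator;

public class DisguisedVm {

    // A trivially simple "model element": scale an input by a factor.
    record ScaleRule(String id, double factor) {}

    private final Map<String, DoubleUnaryOperator> cache = new ConcurrentHashMap<>();

    // Looks like interpretation from the outside: hand over a model
    // element and an input, get a result back...
    public double execute(ScaleRule rule, double input) {
        // ...but internally the rule is compiled once and the
        // executable form is reused on every later call.
        DoubleUnaryOperator compiled =
                cache.computeIfAbsent(rule.id(), k -> compile(rule));
        return compiled.applyAsDouble(input);
    }

    // The hidden "model compiler": turns a model element into runnable code.
    private DoubleUnaryOperator compile(ScaleRule rule) {
        double factor = rule.factor(); // bake the model data into the code
        return x -> x * factor;
    }

    public static void main(String[] args) {
        DisguisedVm vm = new DisguisedVm();
        System.out.println(vm.execute(new ScaleRule("double-it", 2.0), 21.0)); // 42.0
    }
}
```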

In his PhD thesis, Evolution of low-code platforms, Michiel Overeem includes the following figure, which does a fantastic job of illustrating more mixed alternatives.

The first two approaches are the “pure” ones discussed above. The figure also covers two mixed scenarios. In the first (“simplification”), the model is simplified before being passed to the virtual machine, which lets us benefit from an interpretation approach while reducing the cost of building the infrastructure. In the second (“mix-and-match”), parts of the model are generated and others are interpreted.
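A minimal sketch of the “simplification” scenario, assuming a made-up expression model: a richer construct (Between) is lowered to a small core language (comparisons joined by And) before execution, so the interpreter only ever has to understand the core.

```java
public class SimplifyThenInterpret {

    // Core constructs the virtual machine understands.
    sealed interface Core permits Ge, Le, And {}
    record Ge(double bound) implements Core {}           // value >= bound
    record Le(double bound) implements Core {}           // value <= bound
    record And(Core left, Core right) implements Core {}

    // A richer modeling construct the VM does NOT understand directly.
    record Between(double lo, double hi) {}

    // Simplification step: lower the rich construct to the core language.
    static Core simplify(Between b) {
        return new And(new Ge(b.lo()), new Le(b.hi()));
    }

    // A small interpreter that only has to cover the core constructs.
    static boolean eval(Core c, double value) {
        return switch (c) {
            case Ge g -> value >= g.bound();
            case Le l -> value <= l.bound();
            case And a -> eval(a.left(), value) && eval(a.right(), value);
        };
    }

    public static void main(String[] args) {
        Core lowered = simplify(new Between(0, 10));
        System.out.println(eval(lowered, 5));  // true
        System.out.println(eval(lowered, 42)); // false
    }
}
```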

As hinted in my mixed scenario above, there are still other ways to combine the approaches depending on your goals and resources. Let me give you one final example I’ve also seen in practice: a code-generation approach where the generated code depended heavily on a library provided by the same vendor and was useless without it. In this case, the library is a kind of core component (you could even call it a virtual machine) for the generated code, which, more than “real code”, is a kind of “configuration code” for the virtual machine offering the core behaviour.
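To illustrate this last pattern, here is a hypothetical sketch of what such “configuration code” might look like. The VendorRuntime class stands in for the vendor library, and the main method plays the role of the generated code, which does nothing but configure and start the runtime.

```java
import java.util.ArrayList;
import java.util.List;

public class GeneratedApp {

    // Stand-in for the vendor library: the actual "virtual machine"
    // holding all the real behaviour.
    static class VendorRuntime {
        private final List<String> entities = new ArrayList<>();

        VendorRuntime registerEntity(String name) {
            entities.add(name);
            return this; // fluent API so the generated code reads as configuration
        }

        void start() {
            System.out.println("Runtime serving entities: " + entities);
        }
    }

    // What the "generated code" amounts to: pure configuration,
    // useless without the vendor library it calls into.
    public static void main(String[] args) {
        new VendorRuntime()
                .registerEntity("Customer")
                .registerEntity("Order")
                .start();
    }
}
```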

As usual, the key point is to keep in mind the pros and cons of each global approach and then decide how to best combine their strengths given the constraints of your specific scenario.
