Model-driven engineering (and its latest reincarnation, low-code platforms) is saving us countless hours of repetitive boilerplate coding tasks. But the software systems we need to build increase in complexity every day. Current software development projects face a growing demand for advanced features: support for new types of user interfaces (augmented reality, virtual reality, chat and voice interfaces,…), intelligent behaviour to classify/predict/recommend information based on the user’s input, or the need to face new security and sustainability concerns, among many other new types of requirements.

This forces us to define new types of models and languages to express all these requirements and features, and new low-code platforms to transform those models into running code. But this comes with a price: the software models themselves keep growing, both in size and in the number of perspectives they need to cover. It is not just static and behavioral aspects anymore; it’s also machine learning components, natural language interfaces, energy consumption,… and all the relationships among these models. In what follows, I propose low-modeling as a solution. This was the core idea of my keynote talk at ICSOFT 2023. You can see the full video of the talk at the end of this post and read the guest opinion paper I wrote afterward here.

Low-modeling definition

Just as low-code accelerated the coding aspects of a system, I argue that we need new low-modeling approaches to accelerate its modeling aspects. The Forrester report stated that low-code application platforms accelerate app delivery by dramatically reducing the amount of hand-coding required. Similarly, we define low-modeling as the set of strategies that accelerate the modeling of a software system by dramatically reducing the amount of hand-modeling required.

It’s the end of modeling as we know it and I feel fine – R.E.M. (kind of)

The goal of low-modeling techniques is to create initial versions of models (or more complete versions of existing ones) that are then validated and refined by modeling experts. The goal is NOT to replace the need for modeling but to let modelers focus on the more creative and key aspects of the modeling activity instead of wasting time on boilerplate modeling.

Often, a low-modeling platform will also follow a low-code approach to generate the final software code from the (semi)automatically generated models. Low-modeling can also improve the adoption of modeling in companies and organizations. It is well accepted that this adoption challenge is a complex sociotechnical problem, now aggravated as teams become more multidisciplinary and involve people with a less technical background (e.g. experts in ethics to deal with the AI components). In this sense, the goal of low-modeling is not only to increase the productivity of development teams, but also to contribute to the democratization of software development by going beyond what no-code and template-based techniques can offer.


Low-modeling strategies

Similarly to low-code approaches, where code is semi-automatically generated from “earlier” sources (i.e. models in that case), in a low-modeling approach we will see how models are themselves generated from other input sources, such as existing knowledge, (un)structured documents, or even other models.

Note that many of these low-modeling techniques are not completely new, but they will need to be adapted and extended to cover the new types of models required to specify today’s systems (e.g. their smart capabilities or new types of interfaces, as exemplified in the next section).

As examples of such techniques, I’d like to mention the following:

Heuristic-based model generation

Convention over configuration and the use of heuristics can help us create basic models for parts of the system from other existing models. A good example is the automatic generation of behavioural models (see for instance our own work on the generation of behavioural specifications) or user interface models from static models (similar to the scaffolding options offered by many programming frameworks, such as Django’s admin interface). Even if most of these approaches are limited to the generation of simple models (e.g. CRUD-like behavior models), these still cover a large part of the total size of those models, enabling designers to focus on the most “interesting” aspects (the Pareto rule for MDD also applies here).
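To make this concrete, here is a minimal sketch of such a heuristic in Python. The Entity and Operation classes and the naming conventions are illustrative assumptions, not the API of any particular MDE framework: by convention, every entity in the static model is mapped to a default set of CRUD operations.

```python
# Illustrative sketch: heuristic generation of a CRUD-like behavioural
# model from a static model. Entity/Operation are toy stand-ins for the
# metaclasses a real modeling framework would provide.
from dataclasses import dataclass


@dataclass
class Entity:
    name: str
    attributes: list[str]


@dataclass
class Operation:
    name: str
    params: list[str]


def generate_crud_operations(entity: Entity) -> list[Operation]:
    """Convention over configuration: every entity gets a default set of
    create/read/update/delete operations derived from its attributes."""
    return [
        Operation(f"create{entity.name}", entity.attributes),
        Operation(f"read{entity.name}", ["id"]),
        Operation(f"update{entity.name}", ["id", *entity.attributes]),
        Operation(f"delete{entity.name}", ["id"]),
    ]


if __name__ == "__main__":
    book = Entity("Book", ["title", "isbn", "price"])
    for op in generate_crud_operations(book):
        print(op.name, op.params)
```

The generated operations are deliberately dull; the modeler only needs to touch the entities whose behaviour deviates from the convention.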

Knowledge-based model enrichment

For many domains, there is plenty of structured knowledge already available, from simple thesauri to general ontologies like Cyc. This knowledge can be used to enrich a partial model with alternative concepts related to those already present in the model (e.g. based on the distance, in the ontology hierarchy, between the existing concepts and potentially new ones). For some domains, more specific ontologies targeting the knowledge of that particular domain may exist and produce better results. An even more extreme approach can involve deriving the target model by pruning from an initial ontology all the concepts that are superfluous for the system at hand.
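As a toy illustration of this idea, the Python sketch below encodes a small hand-written concept hierarchy (a stand-in for a real resource such as Cyc or a domain-specific ontology) and suggests sibling concepts of those already present in a partial model. The concept names and the distance heuristic are assumptions for the example, not a prescribed algorithm.

```python
# Illustrative sketch: knowledge-based model enrichment via a toy
# ontology represented as a concept -> parent mapping.
ONTOLOGY = {
    "Book": "Publication",
    "Journal": "Publication",
    "Proceedings": "Publication",
    "Publication": "Artifact",
    "Author": "Person",
    "Reviewer": "Person",
    "Person": "Agent",
}


def siblings(concept: str) -> set[str]:
    """Concepts sharing a direct parent, i.e. at distance 2 in the hierarchy."""
    parent = ONTOLOGY.get(concept)
    return {c for c, p in ONTOLOGY.items() if p == parent and c != concept}


def suggest_enrichments(model_concepts: set[str]) -> set[str]:
    """Suggest nearby concepts that the partial model does not cover yet."""
    suggestions: set[str] = set()
    for concept in model_concepts:
        suggestions |= siblings(concept)
    return suggestions - model_concepts


if __name__ == "__main__":
    partial_model = {"Book", "Author"}
    print(suggest_enrichments(partial_model))
    # e.g. {'Journal', 'Proceedings', 'Reviewer'}
```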

Knowledge can also come from models created as part of previous modeling projects in the same domain, either by the same company or by others who contributed them to a common model repository. As before, these previous models can be compared with the current one to suggest ways to enrich it.

Regardless of the specific method, the key idea is to reuse existing knowledge, already formalized by other individuals or whole communities, to speed up the creation of new models for the same domain. And not only that: this knowledge reuse can also improve the overall quality of the models. Differences between the model and the existing knowledge bases could point to errors in the model. These potential errors would then need to be reviewed by the expert to conclude whether they are genuine or whether, for this specific system, we are deliberately deviating from more common specifications.
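A simple sketch of this quality check, assuming an illustrative reference knowledge base: compare an entity’s attributes against those the knowledge base records for the same concept, and hand the differences to the expert for review.

```python
# Illustrative sketch: flagging divergences between a model and a
# reference knowledge base. The REFERENCE data is a hand-written
# example; in practice it would come from an ontology or a shared
# model repository.
REFERENCE = {
    "Book": {"title", "isbn", "author", "publisher"},
}


def flag_divergences(entity: str, attributes: set[str]) -> dict[str, set[str]]:
    expected = REFERENCE.get(entity, set())
    return {
        "possibly_missing": expected - attributes,  # common in the KB, absent here
        "possibly_unusual": attributes - expected,  # present here, unknown to the KB
    }


if __name__ == "__main__":
    report = flag_divergences("Book", {"title", "isbn", "price"})
    print(report)
    # The expert decides: genuine errors, or deliberate deviations?
```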

Machine-learning-based inference

Another group of techniques deals with the variety of ML techniques and applications that could help infer models from unstructured sources. This ranges from the automatic derivation of models from the textual analysis of documents to the creation of modeling assistants (similar to what GitHub Copilot offers to programmers) thanks to the use of Generative AI techniques.
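As a hedged sketch of the Generative AI flavour of this idea, the snippet below asks a large language model to extract a draft domain model from a textual requirement. The prompt, the chosen model name and the expected output format are illustrative assumptions; any capable chat model would do, and the output is only a starting point for expert review.

```python
# Illustrative sketch: ML-based inference of a draft model from text
# using the OpenAI Python client (assumes OPENAI_API_KEY is set).
from openai import OpenAI

client = OpenAI()

REQUIREMENT = (
    "The library keeps track of books, each with a title and an ISBN. "
    "Members can borrow up to five books at a time."
)

PROMPT = (
    "Extract a draft domain model from the text below. "
    "Return one line per entity: EntityName: attribute1, attribute2.\n\n"
    + REQUIREMENT
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative choice of model
    messages=[{"role": "user", "content": PROMPT}],
)

# The extracted entities (e.g. "Book: title, isbn") still need a human
# modeler to validate and refine them.
print(response.choices[0].message.content)
```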

As in all the other domains where AI is applied, these techniques end up being the most powerful ones (as they can extract models from completely unstructured sources and with the least human intervention) but, at the same time, the ones that pose the highest risk, as there are no guarantees on the quality of the results. They may be the fastest way to get some initial results, but they are also the most time-consuming during the review phase.

Note that the quality of the results largely depends on the datasets used during the ML training. It is therefore worth mentioning initiatives targeting the creation and curation of proper model datasets for machine learning, such as ModelSet.

Video of the talk
