Is there a future for Model Transformation Languages? To be honest, I’m not sure. And I think that this concern is shared by other members of the model transformation community. But of course, maybe we are plain wrong.
I think we can all agree that model transformations and manipulations are a key element of any model-driven engineering approach. The “traditional” way to tackle model transformation problems is to write a transformation program using a specific transformation language (such as ATL, QVT, ETL, …). But my feeling is that this traditional strategy seems to be leading us nowhere. On the one hand, I know several companies that prefer to write transformations directly in general-purpose languages like Java. On the other hand, semi-automatic approaches (AI-based, transformation-by-example methods, …) could enable users to generate transformations without actually writing them.
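To make that contrast concrete, here is a minimal sketch (not taken from any particular company or tool) of what a hand-written transformation in plain Java can look like. The ModelClass/Table records below are made up for illustration, not a real metamodel API:

```java
// A hand-coded class-to-relational transformation over hypothetical in-memory
// model types; the record types below are illustrative only.
import java.util.List;

public class ClassToTableTransformation {

    record Attribute(String name, String type) {}
    record ModelClass(String name, List<Attribute> attributes) {}
    record Column(String name, String sqlType) {}
    record Table(String name, List<Column> columns) {}

    // Each source class becomes a table; each attribute becomes a column.
    static Table transform(ModelClass source) {
        List<Column> columns = source.attributes().stream()
                .map(a -> new Column(a.name(), a.type().equals("String") ? "VARCHAR" : "INTEGER"))
                .toList();
        return new Table(source.name().toUpperCase(), columns);
    }

    public static void main(String[] args) {
        ModelClass person = new ModelClass("Person",
                List.of(new Attribute("name", "String"), new Attribute("age", "Integer")));
        System.out.println(transform(person)); // table "PERSON" with two columns
    }
}
```

The same mapping in a dedicated MTL would typically be expressed as a few declarative rules; that trade-off between a familiar GPL and a more concise but niche language is exactly what is being weighed here.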
I think this is an interesting and relevant topic to discuss. That’s why we organized (together with Loli Burgueño and Sébastien Gérard) an open discussion at the ICMT 2019 conference on whether there is still a future for transformation languages. If not, what will replace them? If yes, how can they remain relevant? This discussion presented the results of a survey that was answered by over 60 people (thanks a lot!). See the slides below for some interesting graphics on the usage of and opinions about model transformation languages.
A summary of the survey results is collected in this presentation: https://www.slideshare.net/jcabot/is-there-a-future-for-model-transformation-languages
During the conference session we collected more feedback, and the discussion also continued online. All this input has helped us a lot to interpret, contextualize and expand on the survey results.
Thanks to all this community effort, we have now been able to release the final results of this empirical/community evaluation of the health and future perspectives of the Model Transformation Languages field. You can freely read it in this paper: The Future of Model Transformation Languages: An Open Community Discussion, published in the JOT Journal.
Looking at the results of our study, we can conclude that there is agreement that model transformation languages are becoming less popular but will remain in use in niches where their benefits can be more easily demonstrated. In this sense, MTLs are probably following the typical journey through the hype cycle: after the “peak of inflated expectations” we are now climbing the “slope of enlightenment”.
There is also agreement that the negative results for MTLs are not (only) a technical issue but are mostly due to social and tooling aspects (i.e., knowledge and acceptance of MDE, lack of support and maintenance of MTLs, etc.) and to improvements in GPLs themselves, which have integrated some of the ideas and programming constructs that years ago were only present in MTLs.
We can also conclude that new approaches such as search-based model transformations are not yet considered a practical alternative, probably because they are still mere research prototypes. We believe any academic or practitioner interested in the field of model transformations can get some interesting insights from this work.
Moreover, we also believe that this “exercise” was well appreciated by the community, which felt it was important to have a collective discussion on these key topics. We hope this and similar discussions continue in the future, including also quantitative evaluations. For instance, one of the suggestions from the open discussion was to replicate the experiments in alternative scenarios, e.g. one where traceability is important. Most MTLs are very good at keeping transformation traces and, therefore, these additional experiments could highlight use cases where MTLs are still clearly advantageous.
Indeed a very good question. I’m one of the organizers of the Transformation Tool Contests and I have seen quite a drop in the number of case proposals and solutions over the last few years. Of course there are many reasons for that (e.g. not a big-enough incentive for a submission to start with), but I guess it also hints at a more general decline of interest in model transformation languages and tooling. Is it because these tools are mature enough and widely used, or do people indeed prefer general-purpose languages (perhaps with internal DSLs) for writing their transformations, or something else?
Looking forward to the results!
I took this survey, even though it clearly isn’t quite targeting my usage of model transformation, an xtUML model compiler.
I commented that the use of MTL technology suffers from (or maybe benefits from) the same lack of measurement that afflicts all software technologies. Right now, with software being a measurement-averse culture, you can’t objectively show that one technology is better than another.
This is the biggest issue in software, that probably costs the industry more money than anything else.
True, indeed. The current culture treats development as a black-box activity (by design), encapsulating direct technology measurements. That isn’t necessarily a bad thing, since business benefits are what count at the end of the day.
Instead, externally visible and somewhat business-related indicators are measured: user interface experience/success with top priority, key indicators second.
What got lost is the traceability of technology decisions to these external indicators; there is not even a role in these setups that could take responsibility for it.
But most astonishingly, it often seems of little interest that costs are high.
I think much of the appeal in MDE is lost due to terminology. If we substitute the word “model” (which is already an overloaded term, meaning different things to different people in different domains) with “structured data”, suddenly much of the research becomes relevant and appealing in a much broader range of real-world applications.
As a PhD student, when I try to tell people that I work on improving the performance of model management programs, they’re always confused as to what that means. But if I talk in terms of big data processing, it’s much more understandable.
I’ve never used a model-to-model transformation program, despite being an MDE researcher. Most research is on M2M and very little on other model management tasks. Yet when I think about it, M2M is actually very relevant.
Suppose I use a distributed processing framework like Apache Flink with a CSV file as input (source) and an XML file as the output (sink). The intermediate processing logic might just be a “map” function. What if we thought of that function as a model-to-model transformation? Now all the optimisations the MDE community has been working on find their parallel (pardon the pun) in “everyday” terminology.
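For instance, a rough sketch of that pipeline with Flink’s DataStream API could look like the following (the file paths and the two-column CSV layout are invented for illustration); the map function is where the M2M-style logic lives:

```java
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class CsvToXmlJob {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // Source: each line of the CSV file becomes one element of the stream.
        DataStream<String> csvLines = env.readTextFile("input/people.csv");

        // Conceptually a model-to-model transformation: each source element
        // (a "name,age" CSV row) is mapped to a target element (an XML fragment).
        DataStream<String> xmlRecords = csvLines.map(line -> {
            String[] fields = line.split(",");
            return "<person><name>" + fields[0] + "</name><age>" + fields[1] + "</age></person>";
        });

        // Sink: write the XML fragments out as text.
        xmlRecords.writeAsText("output/people.xml");

        env.execute("CSV to XML as an M2M transformation");
    }
}
```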
In my view, models provide a way to structure data and work with them at a higher level of abstraction. The way we talk about models in the MDE community is far too focused on, say, UML diagrams and not enough on auto-generated models from data. When we talk about scalability in MDE, it’s not about making 1 KB UML diagrams load faster, it’s really about big data processing.
But, what do I know. I’m just a student.
Indeed. We need a broader view when talking about models, transformations, etc. I also recommend you take a look at this previous post on Big Data and modeling.
Models often have behavior as well as data structures.
“Domain” is also an overloaded term these days. When Shlaer-Mellor/xtUML modelers say, “domain”, they refer to a single subject matter in a system. Many other uses of domain seem to refer to a whole system.
@Terminology: I very much agree. The term “model” is not just overloaded, but also bears a heavy weight from the past.
@Big/StructuredData: The observation is also true, indeed. I think at its core it is not so much the fact of being “big”, but the (highly) distributed, uncertain and volatile contexts within which the data is handled.
Referring to the bullet list in Jordi’s linked post above, the two issues near the end of that list (interoperability, personal models) seem to me the most important ones here.
Stated simply, the world has split up into myriads of independent actors, each one acting upon its own individual view of the world, its internal models, which need to be mapped to the outside world.
This concerns larger entities like corporate service offerings and Web APIs of all kinds, but also internal APIs, from REST-based microservice couplings down to small code pieces linked together within the same executable.
It also concerns transferred data indirectly, since behind each data transformation there are implicit models, of whatever quality. Even if these are difficult to capture, difficult to maintain, or ever-changing by nature, that does not mean they do not exist.
With machine learning the scope is expanded from plain transfer into analysis, interpretation and understanding, and while the latter tasks are arguably more complex than plain model transformations, the former data preparation steps are of course related.
Therefore, I’d also agree that there is benefit in applying “model transformations” to approach these tasks more systematically and thereby more efficiently.
The choice of technology is a different matter, and there are pros and cons. In the survey, I mentioned our multi-purpose “OCP” technology (see e.g. xocp.org), which can be used as an in-memory Java object model transformation/projection tool and is applied to many tasks in our software, one of which is traditional M2Ms.
Hi everyone,
a remark from someone who has been doing MBSE/MDSE/MDA consulting for some 15 years.
In my opinion, introducing modeling techniques with the strategic objective of remaining at the MBSE level is pain in vain. The only benefit an organization gets from using MBSE is a common graphical representation for drawings. I write “drawings” intentionally, because the result is a bunch of drawings and not a model.
Starting with MDSE, artifacts get generated. To generate artifacts, a model is required, and that model has to conform to modeling rules. That introduces a significantly higher level of formalism: the modeling team members are now forced to follow modeling rules. That introduces even more pain, but also opens new possibilities for much higher benefits. One such benefit is that a formal model can be validated syntactically in an automated way (e.g. by a finite state machine). That is the first time model transformations come into play: a model validator transforms a model into a validation report. An issue I have with that kind of transformation is that, at least using MOFM2T (which is QVT based), I have to use negated logic: I have to check whether some patterns exist that are not covered by the white list of allowed patterns. As an example, say the modeling rules only allow classes with a given stereotype to contain properties typed using UML standard types. Then I have to check whether a property exists in such a class that is typed with neither UML::Boolean nor UML::Integer, etc. I mostly deal with that issue by including the validation in the model document generation; doing this, the validation is handled by the else cases 😉
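To illustrate the whitelist-versus-negated-logic point outside of MOFM2T, here is a small Java sketch (the Property/StereotypedClass types are made up, not a real UML API): the validator has to look for properties whose type is not in the allowed list and report each hit.

```java
// Negated-logic validation sketch over hypothetical model classes.
import java.util.List;
import java.util.Set;

public class PropertyTypeValidator {

    // Whitelist of UML standard types allowed for properties of stereotyped classes.
    static final Set<String> ALLOWED_TYPES =
            Set.of("UML::Boolean", "UML::Integer", "UML::String", "UML::Real");

    record Property(String name, String typeName) {}
    record StereotypedClass(String name, List<Property> properties) {}

    // The check is negated: look for properties whose type is NOT on the whitelist
    // and turn each hit into an entry of the validation report.
    static List<String> validate(StereotypedClass clazz) {
        return clazz.properties().stream()
                .filter(p -> !ALLOWED_TYPES.contains(p.typeName()))
                .map(p -> clazz.name() + "." + p.name() + " is typed '" + p.typeName()
                        + "', which is not an allowed UML standard type")
                .toList();
    }

    public static void main(String[] args) {
        var clazz = new StereotypedClass("Sensor",
                List.of(new Property("enabled", "UML::Boolean"),
                        new Property("reading", "MyLib::Voltage")));
        validate(clazz).forEach(System.out::println); // reports Sensor.reading
    }
}
```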
So far I have not been a member of a ‘pure MDA’ project. I interpret ‘pure MDA’ as: the model is the single source of truth, implying that all artifacts are generated from the model.
In my free time I am actually experimenting with augmenting/enriching (base) models with additional models that add new aspects. As an example, take a base model specifying some safety-critical technical system; in this case performing an FMEA and/or FTA is recommended. Putting all the FMEA/FTA aspects into the base model results in an overly complex model that is hard to understand. So the approach I am currently playing with is that the FMEA and/or FTA model references the base model (e.g. using the UML PackageImport mechanism) and adds the FMEA or FTA aspects, respectively. I mention this because in this case I prefer referencing over transforming.
/Carsten
Hi again,
sorry for replying to my own post 😉
But writing the previous post somehow had the side effect that I rethought my approach to augmenting/enriching (base) models.
Maybe it is much more convenient to first transform the base model into an augmenting/enriching model. Doing this, the transformation can insert the references from the augmenting/enriching elements to the augmented/enriched elements of the base model.
In a second step, the parameter values can be inserted into the augmenting/enriching model.
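A rough sketch of this two-step idea, using made-up BaseElement/FmeaEntry classes instead of real UML types: step 1 generates one augmenting element per base element with the reference already wired in, and step 2 fills in the parameter values.

```java
import java.util.List;

public class FmeaAugmentation {

    record BaseElement(String id, String name) {}

    // Augmenting element: holds a reference into the base model plus the
    // FMEA parameters, which remain empty after the first step.
    static final class FmeaEntry {
        final BaseElement target;   // reference to the augmented base element
        String failureMode = "";    // filled in during the second step
        int severity = 0;

        FmeaEntry(BaseElement target) { this.target = target; }
    }

    // Step 1: transform the base model into a skeleton augmenting model,
    // inserting the references automatically.
    static List<FmeaEntry> createSkeleton(List<BaseElement> baseModel) {
        return baseModel.stream().map(FmeaEntry::new).toList();
    }

    public static void main(String[] args) {
        var base = List.of(new BaseElement("B1", "Pump"), new BaseElement("B2", "Valve"));
        var fmea = createSkeleton(base);
        // Step 2: insert the parameter values into the augmenting model.
        fmea.get(0).failureMode = "Bearing seizure";
        fmea.get(0).severity = 8;
    }
}
```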
Given that, to answer the question “Is there a future for Model Transformation Languages?”, I would like to say YES.
/Carsten
I think your sentence about not having experienced a pure MDA project is important here. I guess many MDE techniques lose value if they need to be used in a mixed environment.
MDE techniques only lose value in a mixed environment, if they can’t produce a standalone result. My use of xtUML/Shlaer-Mellor in mixed environments has been to model a single subject matter domain, which is a reuse element, so it survives the project architect and is carried over to future projects.
Teams utilizing xtUML/Shlaer-Mellor mostly know what they are doing and why they do it.
That is mostly not the case with the people I deal with. Somebody told them they have to use UML/SysML, mainly because IEC 61508 and its derivatives mention UML/SysML as a semi-formal method that is recommended or even highly recommended at a given SIL (https://en.wikipedia.org/wiki/Safety_integrity_level).
These people never bothered about modeling before. …
It is a completely different world 😉
/Carsten
FWIW, I’ve not worked with xtUML teams. I’ve been the lone practitioner carving out pieces of systems to work with while unsuccessfully trying to win converts. 🙂
It is simply a business case that compares the investment of time & resources to the benefits, respecting the constraints of time to deliver & skills available.
Given that, only low-hanging fruits get harvested.
Automated model validation and documentation are low-hanging fruits that provide sufficient benefit. In some cases, source code skeletons also get generated.
Another interesting case might be that FMEA/FTA thing I mentioned in a prior post. Many RAMS (https://en.wikipedia.org/wiki/RAMS) engineers I have talked with would highly appreciate it. But the business case for this capability is much more difficult to sell.
/Carsten
This isn’t specifically on-topic, but it may be relevant.
About 20 years ago an employer of mine tried using the old ObjecTime tool to develop embedded software for a telephone switch. The tool converted a graphical modeling language (ROOM), similar to UML, into C++ code. I personally had very good success with it, but few others on the team of about 20 people shared my enthusiasm.
The biggest problem was training and adapting the development process to employ the tool. The other team members were unfamiliar with object-oriented methods, software modeling in general, programming to interfaces, writing requirements, making round-trip engineering work, and debugging the models. People would say “The only person who understands how this works is Steve, and we don’t have time to mess with it”.
Training and retaining trained personnel was a big issue then and I think that is true today. My current employer resists investment in training and tools even for in-house projects, viewing it as an unnecessary expense. Compounding the problem is the fact that we often hire outside contracting firms for specific product enhancements or new feature development. Retaining trained personnel at an overseas third-party supplier is completely out of our control. This has proved to be difficult even with GPLs, let alone with specific modeling methods and transformation languages.
– Steve Hanka