Today, Christian Merenda writes a guest post to share his thoughts about model-driven engineering and to present OOMEGA, an open-source model-driven engineering (MDE) platform. Enjoy the post!
Model-driven engineering is a consequential evolution in computer science. Traditionally, only a few people designed programming languages such as Java™, and these general-purpose languages were then used by vast numbers of programmers. MDE tools, in contrast, equip every single engineer with the ability to instantly create the domain-specific language they require. The challenge for MDE tools like OOMEGA is to make this process very economical; otherwise the extra cost of language design outweighs the benefit of a custom modelling language that perfectly fits your needs. The ultimate goal is to make language design affordable to everyone in terms of know-how, time and effort.
What is OOMEGA?
OOMEGA is an open source product that provides a model-driven engineering workbench based on the Eclipse platform. It takes only a few minutes to invent and produce a domain-specific language.
Here are some characteristics of the platform:
- There is a clear separation of abstract syntaxes and concrete syntaxes.
- You may define many concrete syntaxes for one and the same abstract syntax.
- You can edit your models with text-, graphical- and form-based editors in parallel.
- Text editors with syntax highlighting, linking and incremental parsing are instantly provided. No programming effort required!
- Model-to-model (M2M) and model-to-text (M2T) transformations are supported via ATL and Xpand.
- You can store your models in various relational and object databases.
- Models can be shared with a standard CVS or SVN repository. Your text syntax will be utilised for that purpose.
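To make the first two points more tangible, here is a tiny Java sketch of the idea behind separating abstract and concrete syntaxes. This is my own illustration, not OOMEGA code: one abstract model element (a hypothetical `Task` concept) is rendered through two different notations, i.e. two concrete syntaxes for one and the same abstract syntax.

```java
import java.util.List;

public class SyntaxDemo {
    // Abstract syntax: the metamodel concept, independent of any notation.
    record Task(String name, List<String> dependsOn) {}

    // Concrete syntax 1: a keyword-style textual notation.
    static String keywordSyntax(Task t) {
        return "task " + t.name() + " after " + String.join(", ", t.dependsOn()) + ";";
    }

    // Concrete syntax 2: an arrow-style notation for the same element.
    static String arrowSyntax(Task t) {
        return String.join(" & ", t.dependsOn()) + " -> " + t.name();
    }

    public static void main(String[] args) {
        Task t = new Task("deploy", List.of("build", "test"));
        System.out.println(keywordSyntax(t)); // task deploy after build, test;
        System.out.println(arrowSyntax(t));   // build & test -> deploy
    }
}
```

Both renderings are views of the same underlying model object; editing through either one manipulates the single abstract representation.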
See a screenshot of the model navigator and text editor
Why OOMEGA? Why not EMF?
OOMEGA’s front-end is based on the Eclipse™ platform. However, the core is not based on EMF. Why not?
There’s a simple answer to this question: we believe that the meta-metamodel is a major source of innovation. That may sound strange at first, especially when you think of the OMG’s initiative to standardize the meta-metamodel in terms of the MOF and EMOF specifications. However, in my opinion, standards compliance is often (though not always) a pseudo-guarantee, and it destroys a major source of innovation. We want to provide the best and most innovative solution to language engineers, and therefore maximum flexibility is needed to design the core of the system. Consequently, we introduced the metamodelling language M2L, which is powerful, compact, self-describing, and based on a scientific foundation: the Edge Algebra.
Compatibility with third-party tools
Although we make no compromises when designing the system’s core and its meta-metamodel, OOMEGA is an open system and interacts with outstanding tools from the MDE community.
- OOMEGA’s front-end is embedded in the Eclipse™ IDE.
- The ATLAS Transformation Language (ATL) can be used for model-to-model transformations.
- Model-to-Text transformations can be accomplished with the Xpand template language (formerly provided by openArchitectureWare).
- Model repositories are realized on top of leading object databases. In addition, we provide a mapping to relational databases via Hibernate™.
FNR Pearl Chair. Head of the Software Engineering RDI Unit at LIST. Affiliate Professor at University of Luxembourg.
Fantastic call in keeping the meta(^2)modelling something you can scope yourself!
What are the implications of using Edge Algebra? (I admit I’m woefully ill-equipped to comment on CS theory.)
I am currently responsible for the implementation of a requirements engineering tool that is based on Eclipse (EMF+GMF+ATL). I may be trying OOMEGA in the near future, since it seems to me a very interesting workbench.
I still have a doubt. The post comments on this feature:
“You can edit your models with text-, graphical- and form-based editors in parallel.”
I just had a look at the OOMEGA webpage. I understand that, after designing the metamodel, the DSL developer can create a textual syntax specification. However, I could not find anything about graphical modelling.
How is this feature supported?
Thanks.
Cheers!
Sergio
Thanks for your post, Sergio! Good to hear that OOMEGA seems to you a very interesting workbench. 🙂
To your question regarding the feature “You can edit your models with text-, graphical- and form-based editors in parallel”: in fact, one or more textual syntaxes can be declaratively (!) defined on top of an already specified metamodel. That’s all you need to do in order to get an Eclipse text editor with multiple tabs. The purpose of the tabs at the bottom of the editor is to switch between different concrete syntaxes, so you can view (or edit) your model from different perspectives. However, an editor’s tab does not necessarily contain a text editor: you can add other tabs that contain form-based or graphical editors. Admittedly, we do not yet provide a declarative syntax specification language for form-based or graphical syntaxes. So, at present, you need to implement e.g. a GEF-based editor and plug it into OOMEGA’s editor. It’s easy to do so, and we’ve done it several times. A short explanation of how to do that is available in our wiki at http://wiki.oomega.net/display/DOC/Develop+and+plug-in+a+Graphical+Model+Editor.
Please note: thanks to the incremental parsing feature of our text editor, it’s possible to combine text and graphical editors without any problems. Whenever you make changes in, say, the text editor, those changes are immediately propagated to your graphical editor, and vice versa. This works because existing objects are not re-instantiated by the parser; rather, they are updated in place, and your graphical editor’s implementation can rely on OOMEGA’s implementation of the observer pattern. By the way, this also works over the network: you type some text and I will see a new box appear in my graphical editor!
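The mechanism Christian describes (model objects updated in place, with registered editors notified of changes) can be sketched in plain Java. This is a minimal illustration of the observer pattern only, not OOMEGA’s actual API; all names are invented for the example.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

public class ModelSync {
    // A model element that notifies its observers on every change.
    static class Concept {
        private String name;
        private final List<Consumer<Concept>> observers = new ArrayList<>();

        Concept(String name) { this.name = name; }

        void addObserver(Consumer<Concept> o) { observers.add(o); }

        // Called by e.g. an incremental parser: update in place, then notify.
        void setName(String name) {
            this.name = name;
            observers.forEach(o -> o.accept(this));
        }

        String getName() { return name; }
    }

    public static void main(String[] args) {
        Concept box = new Concept("Customer");
        // A "graphical editor" observing the same object the text editor edits.
        List<String> redraws = new ArrayList<>();
        box.addObserver(c -> redraws.add("redraw box: " + c.getName()));

        box.setName("Client"); // simulate an edit in the text editor
        System.out.println(redraws); // [redraw box: Client]
    }
}
```

Because the parser mutates the existing `Concept` rather than replacing it, every observer keeps a valid reference and simply repaints on notification.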
I hope that helps.
Cheers
Christian
Yes, Christian, that helps!
The wiki page illustrates the feature.
Thanks,
Sergio
This is always one of the most difficult trade-offs!
I guess the point would be to know what kinds of properties they can verify (or ensure by the way the metamodel is constructed) thanks to this underlying formalism.
Thanks for your interest in the Edge Algebra, Justin and Jordi! I’ll try to give some background information:
We’ve introduced a formal foundation for metamodelling: the Edge Algebra. The Edge Algebra’s role for the metamodelling language M2L is similar to the Relational Algebra’s role for SQL: the Relational Algebra operates on relations; the Edge Algebra operates on edge functions.
A short note on the terms “edge” and “edge function”: an edge links two nodes in a graph; an edge function represents a set of those edges. As a result, the Edge Algebra operates on graphs, in particular M-graphs.
We were able to formally define the metamodelling language M2L utilising the Edge Algebra’s foundation. Think of models in terms of graphs, and think of metamodels in terms of constraints on those graphs. M2L is an easy-to-use metamodelling language: you define constraints on models (i.e. graphs) by using language elements like concept, attribute, association, composition, and some other more sophisticated terms. Thus, your metamodel imposes constraints on graphs (not all models are valid), and the constraints introduced by your M2L metamodel can be formally reduced to Edge Algebra statements.
M2L defines commonly known metamodelling constructs like attributes and compositions, but we were also able to define more complex yet extremely useful concepts like namespaces, canonical keys, instantiation, etc. in a precise way. Additionally, we are now able to formally discuss various aspects of the metamodelling domain, e.g. query optimisation, contradictory language definitions, and complex constraints.
Finally, one last note: you can regard the Edge Algebra as a formal foundation for object-oriented query languages (ODBMS), too. This is, in fact, another great feature of OOMEGA.
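The “models are graphs, metamodels are constraints on graphs” idea can be illustrated in a few lines of Java. Everything below (the `Edge` record, the `singleOwner` constraint) is my own invention for illustration; it is not part of M2L or the Edge Algebra itself. The sketch represents an edge function as a set of (source, target) edges and checks a composition-like constraint: every target node has exactly one owner.

```java
import java.util.Set;
import java.util.stream.Collectors;

public class EdgeAlgebraSketch {
    // An edge links two nodes; a set of edges stands in for an edge function.
    record Edge(String source, String target) {}

    // Hypothetical metamodel constraint: composition-like single ownership,
    // i.e. no target node may appear with more than one source.
    static boolean singleOwner(Set<Edge> contains) {
        return contains.stream()
                .collect(Collectors.groupingBy(Edge::target, Collectors.counting()))
                .values().stream()
                .allMatch(count -> count == 1);
    }

    public static void main(String[] args) {
        Set<Edge> ok  = Set.of(new Edge("A", "x"), new Edge("B", "y"));
        Set<Edge> bad = Set.of(new Edge("A", "x"), new Edge("B", "x")); // x owned twice
        System.out.println(singleOwner(ok));  // true  -> a valid model
        System.out.println(singleOwner(bad)); // false -> rejected by the metamodel
    }
}
```

In this picture, a metamodel is simply a predicate over graphs: a model conforms if and only if every such constraint evaluates to true.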
Yours
Christian
This is a comment only on the remark made in the post about EMF and MOF standardization. It is not a comment on OOMEGA, per se, which does seem to be quite an interesting product.
Even as someone who has worked on a number of OMG standards, I have to honestly admit that the basic purpose of a standard is, in fact, to limit innovation. However, a good standard must have the balancing benefit of encouraging innovation in other areas.
EMF is an interesting case in point. For Eclipse-based modeling tools, adopting EMF allows one to leverage a lot of existing technology already based on it, and promotes interoperability with other EMF-based tooling. If your goal is not to innovate in metamodeling — that is, if EMF is sufficient for your needs — then adopting EMF can focus and enhance your ability to innovate in producing products based on EMF.
If, on the other hand, EMF is not sufficient to support the innovation you want to do, then you need to move on to something else. However, you then lose the complementary benefits that you would get by using EMF, so there is a trade-off. Innovation for its own sake does not always provide the best pragmatic solution for all users.
Note, however, that EMF (ECore, really) does not actually even conform fully to the OMG MOF standard! This means that, while it promotes the interoperability of Eclipse-based tools, it is a special case for non-Eclipse modeling tools that, for example, base interchange on XMI that conforms to the actual UML MOF metamodel. There were technical reasons for this lack of conformance originally, but now it is basically just an unfortunate annoyance limiting wider tool interoperability (especially as the requirements for XMI interchange have become increasingly clarified). This is a case where the lack of standards conformance is starting to have a noticeable negative impact on the UML modeling community, at least.
Finally, saying that OMG has an “initiative to standardize the meta-metamodel in terms of the MOF and EMOF specifications” is not quite right, or at least it is misleading. Technically, OMG has already standardized MOF for metamodeling — OMG is, after all, fundamentally a consortium whose purpose is to produce standards. But standardizing MOF for metamodeling within OMG was not intended to stifle innovation in the wider community but, rather, to enable the membership of OMG to better innovate in the development of OMG modeling language standards, in the same way that EMF was intended to do this for modeling language implementation within Eclipse.
OMG does not force anyone to use their standards. In the end, any standard is only widely used if the benefit to innovation is worth the cost.
Ed, thanks a lot for your post. I carefully read your comment several times: I think there’s a lot of valuable information in it.
First of all, it’s nice to hear that you think OOMEGA seems to be quite an interesting product. Thanks for your positive vote.
Though I’m pretty sure that you did not misinterpret my original post, I want to clarify one important issue: it was definitely not my intention to call standards committees into question. Actually, I’m sympathetic to the OMG and I do like standards such as UML. Furthermore, I’m sorry for my misleading statement regarding the timing of the MOF specifications: yes, OMG has already standardized MOF for metamodelling.
It’s interesting that you say that the basic purpose of a standard is, in fact, to limit innovation while a good standard should encourage innovation in other areas. Basically you conclude that the benefit of a standard is compatibility and interoperability between tools from different vendors or communities at the cost of limiting innovation in certain areas. Related to OOMEGA we’ve got the following situation:
In sum, we try to get the best of both worlds: innovation and interoperability.
Finally, it’s interesting that EMF (ECore, really) does not actually even conform fully to the OMG MOF standard. Thus, as long as you stick to EMF-based tooling, you’re fine; but as soon as you leave EMF-based products you might run into problems. So the original goal of vendor independence is only partially achieved. That’s exactly what I’ve often observed in different areas: a software product seems to be standard-compliant, but if you really want to move on to some other product you still face the barrier of vendor incompatibility.
Christian —
I don’t think we really have any fundamental disagreement. However, I do think it is good to have an explicit discussion about this point, because it is an important one for model-based tooling such as OOMEGA.
It is also worth noting that EMF and MOF are really somewhat different in kind. EMF is a modeling tool implementation framework that happens to be based on ECore for metamodeling. MOF, on the other hand, is a metamodeling standard whose purposes are only to provide a basis for specifying other OMG modeling languages (such as UML and BPMN) and to promote interoperability (via model interchange or via model repositories) between modeling tools in these languages.
OMG does not produce implementations; it only produces interoperability and interchange standards. A modeling tool can conform to an OMG modeling standard without using anything implementing MOF internally. Indeed, many UML tools are based on EMF which, as I noted, does not have a MOF-conformant meta-metamodel.
The key OMG standard for model interchange is, of course, XMI, which is based on MOF. Unfortunately, there have been a number of issues with the clarity of the MOF, XMI and UML specifications, which have led to incompatible implementations of XMI export and import. As you noted, this defeats the goal of vendor independence.
Fortunately, the Model Interchange Working Group at OMG has recently been working with vendors to clarify these specifications and provide a suite of XMI interchange tests to promote interoperability (see http://www.omgwiki.org/model-interchange/doku.php). Soon, I think, you will start seeing EMF XMI and other vendor implementations of XMI all converging to true interoperable, standard conformance, with even a test suite to validate such conformance.
So we will then finally at least have a real basis for vendor-independent interchange of models at a syntactic level. And this is the minimum basis required to eventually achieve true semantic interchange. But that is another topic!