As part of my MDE course, I devoted four sessions (of around 3h each) to a code-generation exercise. The goal was to model a simple CRUD-based web application and then use a professional model-driven development tool to automatically generate and deploy the application without writing a single line of code.
I thought this was the perfect example to show them the wonders of code generation and, even more, to use it as bait to turn them into strong advocates of model-driven engineering in general. Well, based on the responses to a short survey I asked them to answer at the end of the exercise, I failed. In fact, I also failed last year, with a different tool. So it may be that I'm not a good teacher (luckily for my students and myself, my teaching days with undergrads are over), or that MDD tools in general are not yet good enough. Probably a combination of the two.
Anyway, let's take a closer look at what the (29) students said after their first experience with a code-generation tool. But first, some things to consider before jumping too fast to conclusions:
- Note that the survey was mandatory and not anonymous. I emphasized several times that the only thing I'd evaluate from the survey was whether their reasons for (dis)liking the tool were well argued but, still, some students may have chosen to give a more positive review than what they really thought.
- This scenario (generating a simple CRUD web application) should be the ABC of code generators, since there was no complex dynamic behavior that would require any kind of behavioural model (state machines, pre/postconditions, …), so it was, IMHO, the best one to illustrate the benefits of MDD.
- On the “cons” side, it's true that students are not professional developers having to build the same kind of application again and again. We already know that MDD pays off in the mid-term, so they lack that longer-term perspective when evaluating the experience.
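To make the exercise concrete: the students had to describe their application as a model (essentially entities and attributes) and let the tool derive everything else. The sketch below is only an illustration of that idea, with an invented entity and a toy generator; it is not the actual tool or metamodel used in the course.

```python
# Hypothetical entity model: the kind of input a CRUD generator consumes.
entity = {
    "name": "Book",
    "attributes": [("title", "VARCHAR(255)"), ("year", "INTEGER")],
}

def to_ddl(entity):
    """Derive a CREATE TABLE statement from the entity model."""
    cols = ", ".join(f"{n} {t}" for n, t in entity["attributes"])
    return (f"CREATE TABLE {entity['name'].lower()} "
            f"(id INTEGER PRIMARY KEY, {cols});")

print(to_ddl(entity))
```

A real tool would of course also generate the data-access layer, forms, and navigation from the same model; the point is that the model is the single source from which everything is produced.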
Q1 – How would you mark the experience of using the code-generation tool?
(5 – Very satisfactory, 1 – Totally unacceptable)
Among the most cited (negative) reasons we had: lots of installation problems (I had to let them work in pairs to make it easier for them to have access to at least one machine where the tool was working smoothly), lack of optimization of the deployed application (quite noticeable for a small application like the one we were building, probably more reasonable as we move to more complex ones), sudden crashes and corrupted projects, good for prototyping but unsure if the method scales, lack of documentation, and difficulty customizing the code.
There were also some (but fewer) positive comments, like “I think this is the tendency of the future”, but we always learn more from the criticisms 🙂
Q2 – If you were working in a software company, how likely is it that you would choose to use some kind of code-generation tool in your next web development project?
(5 – Totally sure , 1 – No way I’m doing that)
Here, a few mentioned that an MDD tool could be used to generate the back-end part of a web development project, and that it can be a great help for database management or a quick-and-dirty generation of a prototype. But for the front-end part, they thought it was a waste of time, mostly because they had the feeling they would end up modifying lots of the generated code, since this kind of tool will never be as configurable as a hand-coded HTML/JS project.
A couple of students also mentioned that, for the kind of scenario where they would find the tool useful (this back-end admin generation), a tool like Telosys (or, for that matter, most programming frameworks nowadays) that can generate a simple scaffolding interface from only a database definition could be simpler to use. Clearly, MDD tools need to find the sweet spot between language expressivity and tool simplicity here.
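The scaffolding idea the students liked is easy to picture: take a table definition and mechanically emit the classic CRUD endpoints. The sketch below is a made-up illustration of that style of generation (table shape and route conventions invented here), not Telosys's actual template language.

```python
# Hypothetical table metadata, e.g. read from a database schema.
table = {"name": "customer", "columns": ["id", "name", "email"]}

def scaffold_routes(table):
    """Emit the classic CRUD endpoints for one table."""
    base = f"/{table['name']}s"
    return [
        ("GET",    base),             # list all rows
        ("POST",   base),             # create a row
        ("GET",    base + "/<id>"),   # read one row
        ("PUT",    base + "/<id>"),   # update a row
        ("DELETE", base + "/<id>"),   # delete a row
    ]

for verb, path in scaffold_routes(table):
    print(verb, path)
```

Because the input is just a database definition rather than a full modeling language, there is very little to learn, which is exactly the simplicity/expressivity trade-off mentioned above.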
Q3 – In your opinion, what would make MDD tools more useful and attractive to programmers?
This open question yielded some good suggestions for new features for this kind of tool, like:
- Being able to build your pages by drag and drop
- Ability to change the generated code in a manner that would be reflected back in the model
- Some common patterns already implemented by default (like login, CRUD tasks, etc.)
- Easier ways to build multi-language applications
plus some requests on non-functional aspects like documentation, better compatibility, … (a result of the problems reported in the first question).
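The "common patterns by default" request is essentially asking the generator to ship with a library of ready-made templates. A minimal sketch of that idea, with invented pattern names and bodies (no real tool's template syntax is implied):

```python
from string import Template

# A tiny registry of default patterns the generator could ship with.
PATTERNS = {
    "login": Template("POST /login -> authenticate($user_model)"),
    "crud":  Template("CRUD endpoints for $entity"),
}

def apply_pattern(name, **params):
    """Instantiate a built-in pattern with model-specific parameters."""
    return PATTERNS[name].substitute(**params)

print(apply_pattern("login", user_model="User"))
```

The user would then only model what is specific to their application and pick the patterns they need, instead of re-modeling a login flow for the hundredth time.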
Summary
I think this comment from a student is the perfect summary of the situation:
The concept of generating code seems good in itself. However, I had so many problems with the tool I didn’t even think I was saving time. I also had the feeling the tool was lacking a lot of flexibility to really get the result you want/need.
that is, code generation is not really the problem; the problem is that (after so many years) we haven't been able to build usable tools (usable for an end-user company, not just useful when used in-house by expert consultants) that truly deliver on the promises we make.
As a side note, several students angrily complained about a remark I made in class about code-generation tools replacing programmers in the development of many kinds of business applications. Regardless of what you think about code generation, we must be doing something wrong if our students see themselves mainly as future programmers (and not engineers, architects, …).
I have to say that this is my second failure with code generation. Before this, I also failed trying to sell online code-generation services, so maybe it's just that I'm not the man for the job 🙂 .
Update: You may also want to take a look at the Reddit discussion on the topic.
FNR Pearl Chair. Head of the Software Engineering RDI Unit at LIST. Affiliate Professor at University of Luxembourg. More about me.
I am also facing the same kind of challenges when trying to convince experienced/senior devs about MDD. They just view it as “just another means” of generating code that they will, sooner or later, have to manage manually. I am struggling to make clear the value of raising the level of abstraction and, as you said in your conclusion, of promoting them to be architects or engineers (and not devs).
The problem with most code generation tools that I’ve come across in 25 years of software development is that the level of abstraction required to make them function often makes the code they generate sub-optimal. So as a developer, I have continuously found myself writing code to generate that optimal (or more to the point, scalable) code.
History has shown us that the argument of developers being replaced by code generators, while perhaps true in some very specific circumstances, doesn't actually ring true in the real world.
Every time we have a new RAD tool, templating system or other system that has promised to make software developers obsolete, it has instead spawned a whole industry of people to support it – Visual Basic, Microsoft Access, Frontpage, Dreamweaver, Sharepoint, Blogger, WordPress, Prestashop, Magento etc.
So far from replacing the developers, it merely moves their focus in a different direction – either that of maintaining the code the generator creates, maintaining the templates the generator uses to create the end product, or they move onto developing more specialized systems.
As long as there is software to be developed, there will be a need for developers to maintain it. There are always companies who want/need custom functionality.
This is the same argument I heard when moving from Assembly programming to C. It was true at first, and then when the compilers became good enough, it was just fear of new technology.
And now what do you program in? You didn’t stop being a developer because C came along and stomped all over Assembly, did you? Nope… you learned C and changed your approach. There are still Cobol developers making a metric ton of cash… probably making more for their Cobol work than they ever did when it was in its heyday.
Far from being fear, this has always proven (so far) to be the case. When the tools *do* eventually get good enough, the whole industry shifts its focus to something else and development largely carries on, leaving specialists in the forgotten arts along with the automated tools to work on problems that have already been solved – the boring problems. Until machines can actually out-think humans, we will have humans that are paid to think, and when the machines can finally out-think us, we’ll pay people to stop them destroying us… until then, I see little point in worrying about automation putting us all out of jobs.
I completely understand your senior devs.
If MDD works in an area, then you have stopped solving the interesting problems there and are just creating anonymous off-the-shelf solutions.
But that’s just my experience regarding MDD.
Depending upon the specific technology, the models can be the code. IOW, you’re going to be able to “code” just as interesting solutions as in 3GL; you’re just going to be more productive due to a higher abstraction level.
Most young developers today never had access to MDD tools, so they learned 3GL programming. This makes convincing them the same as with senior developers.
Most job postings aren't asking for MDD, so the desire to learn it is affected by career concerns. This is where method has to triumph over the merely practical: you have to show a reason for good separation of concerns and abstraction that will apply even to 3GL programming.
I think the job postings are indeed a big problem. They know that by becoming experts in, for instance, AngularJS they will easily find a job. We can't say the same about MDD expertise.
Worthwhile comments from the students; I frankly expected more negative results, just because people like the freedom of programming, so the openness to the idea of MDD was refreshing. But as the author has pointed out, the tools have a ways to go. Part of the issue is that producing a code generator that supports “round-tripping” (i.e. the end result can be tweaked in code, and that code can be accepted back into the repository without breaking the original model) is really complex. This is also the challenge in the business process space in which I find myself (on the evangelism end of things). Regarding “patterns”, consider that “business process”, “decisioning” and “ontologies” are meta-patterns, for which there are specialized tools. Using such tools (or technologies) in concert goes part of the way to enabling richer application development.
Thank you for sharing the study and its results. Even more, I would like to thank you for teaching the topic, as unfortunately most students in computer science never get any experience with MDD at all – models being mere sketches supporting only communication and understanding. This is in sharp contrast to some other fields like control engineering, in which various block diagrams, data flows etc. are used to model and generate the code. Also, the related tools, often commercial ones like LabVIEW or Simulink/Matlab, come with complete examples, guidelines and tutorials providing a proper user experience. Which technologies did you apply (for modeling, generators, and the target language for the CRUD app)?
@John H Morris: It is not good practice to modify the generated code. Better to modify the generator, keep generated and manually written code separate, extend the modeling language to cover those parts that need “tweaking”, etc.
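One well-known way to keep generated and manually written code separate is the "generation gap" pattern: the generator owns a base class that is overwritten on every regeneration, while customizations live in a handwritten subclass the generator never touches. The sketch below uses invented class names and a made-up VAT rule purely for illustration; it is not output from any particular MDD tool.

```python
from collections import namedtuple

Line = namedtuple("Line", "amount")

class InvoiceBase:
    """Generated: rewritten whenever the model changes."""
    def __init__(self, lines):
        self.lines = list(lines)
    def total(self):
        return sum(line.amount for line in self.lines)

class Invoice(InvoiceBase):
    """Handwritten: survives regeneration untouched."""
    def total(self):
        return super().total() * 1.21   # e.g. a hand-coded VAT rule

inv = Invoice([Line(100), Line(50)])
print(inv.total())
```

Regenerating `InvoiceBase` from an updated model never clobbers the hand-coded logic, because the two live in different files with a clear override point.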
Question: would users in the fields you mention be capable of writing the generated code themselves? If not, then modeling is the only way they have to build things, while for programmers it's an alternative.
Good question, and without knowing the answer I would expect that some would be able to write the code too and others would not. I'm not sure the question is relevant in the end. For example, almost everyone can cut trees manually, yet we use machines to do that. To follow the analogy, students of forestry naturally need to understand and practice manual cutting too, but that does not mean they continue doing it in their career. Similarly, when I was a student, we had a mandatory course on assembler, but that did not mean we were all expected to program in assembly in our professional careers.
Finally, if machines don't work properly or are hard to use, maintain etc., then naturally they are not used much. BTW: which tools were you using in those two cases?
If you raise the level of abstraction, round-tripping is impossible by definition. There are machine code sequences that cannot be rendered in assembly language, and assembly language sequences that cannot be rendered in C. Similarly there is Java code that cannot be rendered in UML. (Inlining the lower level in the higher-level code doesn’t count, as that doesn’t raise the level of abstraction.)
Those were all general purpose languages; with domain-specific languages and domain-specific generators, this becomes even more obviously true. Let’s assume Jordi’s course involved a generator that takes a UML Class and produces an SQL table and Java CRUD code. That works fine in that direction, but the vast majority of hand-edits to the Java code will not be able to be represented back into the UML diagram (at least not in a way that the generator will recognize and be able to reproduce the hand-edited code from).
Although some fiddling around with inlining and/or using both the model and the edited code as sources for generation may allow us to do a demo that makes it look like round-tripping is possible, in the general case it can’t work – and has never worked. If we have to be responsible for both the 3GL code and the model, we’ve increased rather than decreased our workload (particularly since maintenance forms the main effort of software development).
Only when the generated code can be ignored have we truly raised the level of abstraction. That happened when we moved from assembly language to C, and has been seen to work when moving from code to a domain-specific modeling language. For success, the domain-specific language must be good, and the tools must be reliable. Sadly most attempts, like the one above, fail because the language isn’t domain-specific or good, and the tools are poor. If you choose the wrong language and the wrong tool, the results will tell you about them or their applicability here rather than about MDD or code generation in general. As an analogy: try building the same CRUD application in ML or Perl with a broken, undocumented interpreter – you can’t conclude programming is bad from that.
> If you raise the level of abstraction, round-tripping is impossible by definition.
Totally agree!
Sorry, but “There are machine code sequences that cannot be rendered in assembly language, and assembly language sequences that cannot be rendered in C” is manifestly incorrect. You've WRITTEN it in C, COMPILED it into assembly, and LINKED it into machine code.
The reason you feel you cannot reverse engineer that material is the many optimizations performed on the written C by the compiler when generating the assembly, and on the assembly when generating machine code. All this is hidden by the tooling, so what you put in (C source) is difficult to relate to what you get out (executable). Not only hidden, but the changes are lost, if they are recorded at all.
“manifestly incorrect” and all caps seems a little excessive… The point is that I’ve not written it in C, I’ve not ‘compiled’ it into assembly, and I certainly haven’t ‘linked’ it into machine code. That’s the normal “downstream” direction, but in round-tripping we’re talking about also wanting the “upstream” direction: disassemble machine code into assembly language, and decompile assembly into C. And with hand-editing at the lower-level, there is no guarantee that there will exist a higher-level construct to correspond to everything you can do on the lower level.
I can write unusual machine code, such that the disassembler doesn't know what to do with it (other than bail out by treating it as a data segment or some such). E.g. with DD DD … on the Z80, the first DD is simply ignored. A disassembler would just produce whatever IX operation was specified starting with the second DD, and round-trip assembling that back would lose the first DD. That loss might be important: I've written machine code that timed the T-states for each operation, to ensure the right length of time in a certain loop to be able to bit-bang MIDI commands at the correct 31250 baud rate (bps). And got paid for it too, so this isn't just some esoteric special case – even if it's not the kind of thing you're likely to have met these days.
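The Z80 anecdote can be caricatured in a few lines. This is a toy, not a real Z80 toolchain: a "disassembler" that, like the real instruction decoder, treats the first of two consecutive 0xDD prefix bytes as having no effect and drops it, and a trivial "assembler" that re-emits the decoded bytes. Round-tripping loses the redundant prefix, and with it the extra T-states the original code relied on.

```python
PREFIX = 0xDD

def disassemble(code):
    """Decode bytes, silently swallowing a redundant DD prefix."""
    out, i = [], 0
    while i < len(code):
        if code[i] == PREFIX and i + 1 < len(code) and code[i + 1] == PREFIX:
            i += 1          # first DD has no effect, so it is dropped
            continue
        out.append(code[i])
        i += 1
    return out

def assemble(decoded):
    """Trivial re-assembly of the decoded bytes."""
    return list(decoded)

original = [0xDD, 0xDD, 0x21, 0x00, 0x40]   # hypothetical double-prefixed op
roundtrip = assemble(disassemble(original))
print(roundtrip != original)                # the timing byte is gone
```

The semantics survive the round trip; the timing-relevant byte does not, which is exactly the kind of lower-level detail that has no higher-level representation.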
Many times when we now talk about MDD we’re talking about a completely different language which somebody has to learn and internalize before starting to become productive. It’s a strange environment, a strange language with strange rules that’s less well-tested and that will cause weird quirks in its own compiler and the final compiler before we get any results – I can understand the aversion.
Why do we keep focusing on external language MDD? In-language MDD is also a very useful tool and comes naturally from properly applying OO design – having the atoms defined in a better way allows you to write your code much more concisely and clearly. How about giving them a project that has the first, second and third layer of atoms (first == STL, second is your simple logical wrapper classes, third is the functional things around it – but note the layers aren’t quite fixed) defined such that they can use it? That makes creating an application with them as complicated as clicking together Legos – and they know how to do that.
I view MDD as an abstraction level, not a wrapping of 3GL code. The idea is that your models are compiled into some less abstract code via an optimizing compiler. Can you write assembly that’s more efficient than the compiled C code? Sometimes. The same relationship applies between the model and the compiled output.
The reason we raise the abstraction level is to enable better reuse, better quality, and less time spent coding. Sometimes you have to fall back on a lower level of abstraction, but for the most part, falling back is undesirable.
It's just Gaussian: for some reason, most do not like to raise their level of abstraction, and that's what MDD is all about. Next time, maybe you can start early in the semester, ask them to produce 16 CRUDLs (L is for “Link”) with a DRY mandate, and the warning that by the end of the semester you will change 30% of the object network and attributes, and they'll have one week to redo it.
+1, and to welcome students to an even more realistic world, the change they need to make is not to the objects and attributes they originally defined.
+2 to both above!
It is often too expensive to raise ALL software artefacts to the same abstraction level, to be forward-engineered into implementations through the same MDA stack. It has been done a few exceptional times, but it's not economical for every project.
Therefore, raising the level of abstraction has to be supported with different approaches in different parts of the software.
We have quite a powerful toolkit and set of recipes from which to choose.
My general approach is to profit from business responsibility and/or technology layer boundaries to partition, and then choose the most appropriate tool to apply MDA to each part.
No, this does not require a full MOF repository and stack; the intent is precisely not to be FORCED to have everything under the same unified roof.
The apparent problems in round-trip and code-gen have many solutions.
One is to abandon round-tripping altogether, and separate generated artefacts from handwritten artefacts at responsibility or layer boundaries.
Another is to have both the meta level and the code in the same place, with annotations.
Some parts of the software may offer maximum profit when investing in a full-fledged MDA approach with MOF/CIM/PIM/PSM/framework, usually when the business requires maximum predictability of the software following changing business requirements, in time and with quality.
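The "meta and code in the same place, with annotations" option mentioned above is often implemented as protected regions: the generator rewrites a file on every regeneration but preserves anything between markers, so hand edits survive. The marker syntax below is invented for illustration (real tools each have their own convention, e.g. EMF's `@generated` tags).

```python
import re

BEGIN, END = "// BEGIN USER CODE", "// END USER CODE"

def preserved(old_text):
    """Extract the handwritten block from the previous file version."""
    m = re.search(re.escape(BEGIN) + r"\n(.*?)" + re.escape(END),
                  old_text, re.S)
    return m.group(1) if m else ""

def regenerate(old_text):
    """Rewrite the generated parts, carrying the user block over."""
    return f"// generated header v2\n{BEGIN}\n{preserved(old_text)}{END}\n"

old = f"// generated header v1\n{BEGIN}\nint custom = 1;\n{END}\n"
new = regenerate(old)
print(new)
```

The generated header is freely replaced while the user's line survives, which is the whole contract of the annotation approach.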
Code IS modeling. Why would I ever want to replace that with code-generation tools, except maybe at the beginning of a project? It's like breaking my own legs so that I can use crutches.
Curious: are you then also against programming frameworks that generate part of the code (e.g. scaffolding) for you?
I'd just reply “models are code”, or, longer, “some model representations are code”.
I'd replace hand-coding with tooling anytime, because even when doing an excellent job at component / OO / functional design, embracing DRY, TDD, and so on, by the time you add to the really semantic code all the organizational-driven and technology-driven aspects (i.e. security, audit, defensive coding) and bind it to middleware and UI, you end up with a mass of code of which usually only 10% to 50% is really interesting to the business domain. It takes an awful lot of effort to drive business changes by hand all the way to deployment, so any tooling, including relevant MDA, is more than welcome.
Numbers taken from real projects show that it's 1–3%! While I have just a small data set at hand, I am quite convinced that these numbers apply to the vast majority, really. And even these 1–3% still contain stuff that I personally consider low-level technical, but it's hard to factor that out.
We have almost 15 years of experience with a UML-based model-driven approach for a big JEE application. The summary, it seems, is:
On the one hand, code generation saved our butts when doing implementation redesign (for whatever reason) more than once.
On the other hand, we removed many artefacts formerly generated from the model and kept them as code artefacts, or moved code formerly generated from templates into our framework API. The result is cleaner generated code and models focussing only on domain elements, but also ‘model-driven’ being only part of the overall development process. The manually written code gained more and more importance again.
We also worked with behavioural models, like ETL processes, graphical flow modeling a.s.o.
Reading the article and the comments, I got the impression that the debate lacks a separation of aspects, which makes arguing very difficult. So let's try:
1.) “Modeling” vs. “graphical modeling”. While the former is good, the latter is often a big barrier to efficiency in development. The graphical tools are bulky and often lack features like (smart) versioning, merging, collaboration, and good integration into the overall development tool chain. The development “flow” is often disturbed by layout problems. Working with mouse, property dialogs and popup windows never reaches the speed of a well-configured source IDE.
Graphical diagrams of a certain size don’t contribute to gaining overview but are more of a hindrance. (While I prefer graphical reporting, like with ObjectAid, I loathe graphical construction).
2.) Modeling vs. coding (i.e.: generate everything). One of our biggest mistakes was the assumption that it would help to generate GUI code from a UML model, with a bit of layouting in a subsequent design step. This led to the necessity of creating a proprietary ViewDesigner tool. The toolchain and development process surely never attained the maturity and efficiency of common GUI designers.
3.) Static vs. behavioural models. While the advantages of graphical diagrams for static aspects (like classes and their relationships) are reasonable to some degree, graphical development and presentation of behaviour (data flow, control flow with conditional n-way branches, error handling a.s.o.) is often tedious, especially when a certain level of complexity is exceeded.
4.) Code generation. While code generation is definitely a Good Thing (TM), there are many ways to do it besides “model driven”. Code generation is also not necessarily done at the “source code” level, so we have to take all of the following into consideration: bytecode instrumentation, class generation from declarations (WSDL, XSD, …), IDE templates or shortcuts, build tools (scaffolding, profile-based prototyping, …), GUI designers.
Each of these helps to gain productivity, especially when used in combination. But each is also easily accessible and applicable, which is often not the case for tools claiming to be ‘model driven’.
5.) Abstraction. While abstraction is a Good Thing (TM) too, it is not necessarily gained by “modeling, and modeling everything”. Abstraction is also achieved by good API design of libraries and frameworks, which in the end can even lead to internal DSLs of quite some productive quality. No “code generation” involved.
Abstraction is indeed a main topic of the contemporary rediscovery and reinforcement of functional programming, instead of or even beside classical OO. Again, no “code generation” involved, but it requires a change in thinking and approaching problems (-> paradigm). This is more and more supported by modern languages, which include more and more abstractions or allow such abstractions to be shaped.
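The "abstraction through API design, no code generation involved" point can be made concrete with a tiny internal DSL. The fluent query builder below is invented for illustration; the abstraction lives entirely in the host language, with no extra toolchain step.

```python
class Query:
    """A minimal internal DSL for building SQL, via method chaining."""
    def __init__(self, table):
        self.table, self.filters = table, []

    def where(self, cond):
        self.filters.append(cond)
        return self                      # return self for fluent chaining

    def sql(self):
        w = " AND ".join(self.filters)
        return f"SELECT * FROM {self.table}" + (f" WHERE {w}" if w else "")

q = Query("books").where("year > 2000").where("lang = 'en'")
print(q.sql())
```

The user writes at a higher level of abstraction than raw strings, yet debugging, versioning and IDE support all come for free from the host language, which is precisely the appeal of internal DSLs over external modeling tools.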
In the end, the way from model to code is isomorphic to the way from code to binary artefact. It only puts another level and production step (and thus tooling) into the calculation, but the efficiency (time to market) and effectiveness (quality of results) *must* always be proven, and your survey results show an intuitive impression that those benefits are often missed, which is btw mirrored by our experience.
The most effective tools here are in almost all aspects indistinguishable from compiled programming languages, and thus already pass the border from what I would still consider ‘model driven’ into declarative programming land.
This is where differences in methods and tools make this discussion harder. Your 15 years experience using a “UML based model driven approach” don’t match my 15 years experience using a “UML based model driven approach”.
1) Depends on tool maturity. Some aspects were also my experience 15 years ago, but haven’t been a factor within the last 7 years.
3 & 4) Do not match my experiences.
5) Abstraction was not obtained through layering in my usage. I was compiling a higher level language into 3GL, not generating frameworks or wrappers.
I remarked to Jordi that the conversation on Reddit was mostly centered on specific experiences with an unconstrained definition of “code generation”, and he pointed out that we were probably also stumbling on an unconstrained definition of “model”. Standard software development conversation problems. 😉
Your description looks like you really have profound practical experience, and many, many issues match our own findings. Just one note: we share your experiences with regard to GUIs too, but we discovered an approach/abstraction which also improves the situation and allows applying MDx with benefit here. If you're curious, mail me.
What about the elephant in the room?
Think about your preferred software application…
Now imagine “Lots of installation problems” and “Sudden crashes and corrupted projects”… Would you still love it? Actually, would you still use it?
MDD depends on trust: things that you used to control are now “taken care of” by the tool, and you cannot trust a platform with that low a level of quality.
I am sure the results would be quite different with an MDD tool that actually works!
While that is very true, there is another, really, really huge dinosaur next to the elephant:
People often compare New-MDx-Tool-XYZ with Your-Favorite-Mature-IDE. Even the MDx tool promoters (vendors, creators, …) do the same: they say, hey, look, we can generate a simple CRUD app with 3 classes, isn't that cool? (No, it's not.) So what's wrong with the comparison? Isn't MDx just lame?
What I mean is, since MDx is about abstraction, it does not make any sense to try to solve a low-level problem that already has a good solution. Here abstraction is absolutely unnecessary. MDx, and its tools, make sense in bigger settings: running longer, being more complex. And not by degrees, but by magnitudes.
To me, such a tool comparison is like asking a home baker to create his next single cake in a baking factory. Surely, and very rightfully, he would refuse to do so, and prefer his home mixer and oven over the factory's control room with its many knobs and warning lights.
Actually, yes, at the moment MDx tools ARE less convenient and mature than contemporary IDEs, but their use cases are quite different, too.
Dear Jordi,
I am convinced that you are a great teacher, so this cannot explain the result.
Congrats on your poll, which gives a factual analysis of this issue, one that is key for model acceptance.
To me there are three points:
– The tool issue: the tool must be flexible enough, and smoothly maintain code-model consistency
– The way of working and the subject: GUIs are probably one domain where an MDA approach is difficult to set up. MDA will be interesting if you have a large set of GUI artifacts to produce, with a well-defined set of rules to apply. At Softeam/Modeliosoft, we promote a pragmatic approach where, depending on the benefits, parts of the development will be model driven and parts will be round-trip driven. Some parts are architecture driven with regular patterns; some parts are ad hoc and more code centric. Most MDA failures stem from a dogmatic view as opposed to a pragmatic approach.
– As you point out, the biggest benefits come with maintenance issues, large developments with rules to follow, fighting against architecture decay, and knowledge management when there is team turnover (that is, always for large projects, or for maintenance management). That cannot be measured by your students.
Four white papers to enlighten this debate can be downloaded from https://www.modeliosoft.com/en/resources/white-papers.html.
>>> Improve your Java development efficiency with Modelio and UML
This white paper will walk through typical Java modeling use cases and describe how Modelio can be used to model Java architecture. It will demonstrate one of Modelio’s most useful features: its ability to automatically maintain consistency between the code and the model, so that any changes made to the code will automatically update the model and vice-versa.
>>> Improving existing Java code with a UML modeling environment
This white paper will show how the Modelio modeling environment can enable you to improve existing code, enhance its documentation, and assist in the understanding of the architecture of a Java application. These services constitute a first level of assistance, support and automation, which allow you to go even further with more elaborate use cases, such as the modernization of an application, the reverse documentation and reverse design of an existing application, the analysis of existing application architecture, and so on.
>>> Case study: Achieving better software quality and 30% productivity gains using model-based development
DCNS has developed an internal information system development process that combines a component-oriented approach with the implementation of the UML and MDA technologies using Modelio, and the use of aspect-oriented development environments (AOP). Productivity gains of 30% are expected on very large systems with strong quality constraints.
>>> Organizing a large model-driven software development project – Case study
Activities such as configuration and version management, integration, validation and team organization and cooperation are all too often neglected by development teams, but remain crucial if a project is to succeed. This white paper is a reminder that numerous tools exist to help assist and automate these activities, notably in the world of open-source applications. An example illustrates how these tools can be used in conjunction with a modeling tool (Modelio), in order to obtain successful model-driven development.
Hello,
I fully agree that the benefits of MDE can be seen in large-scale projects and especially in the maintenance phase.
We ran some quantitative studies on industrial cases in the past that demonstrate that too.
See for instance this paper:
Acerbis, Bongio, Brambilla, Tisi, Ceri, Tosetti. Developing eBusiness Solutions with a Model Driven Approach: The Case of Acer EMEA. ICWE 2007, pages 539-544.
http://link.springer.com/chapter/10.1007%2F978-3-540-73597-7_51
or this (more recent) one studying size of projects and activities of people in MDE industrial settings:
Brambilla, Fraternali. Large-scale Model-Driven Engineering of web user interaction: The WebML and WebRatio experience. Science of Computer Programming, Volume 89, Part B, 1 September 2014, Pages 71–87.
http://www.sciencedirect.com/science/article/pii/S0167642313000701
Who needs roundtripping? Does anyone complain about the lack of roundtrip abilities between ASM and SQL? No? Why not? That’s strange, because it seems to be an absolutely remarkable problem, something that blocks people from using SQL at all – at least according to what I read here. If I can’t roundtrip from SQL to ASM and back, I cannot use that tool. I will therefore continue to code my database queries in hand-optimized assembler. And yes, my database is also handwritten assembler. Some of the most performant bits are those I toggled carefully by hand from 0 to 1.
As a wannabe model-driven developer, I have a hard time finding sufficiently complex examples or exercises. One idea is to start with just one “business-level” problem domain, produce a functional program, and then add more aspects. Show how to model other domains and weave them in one at a time w/o changing the original model. (The Balcer-Mellor book does have a complete example, but all the bridging is explicit and cross-cutting domains like security are not addressed.) I think a progressive exercise would make a compelling experience and teach some valuable skills.
I’ve seen some functional programming blog articles that take this approach. The examples are compelling and I get some usable ideas. MDD examples can be compelling but the ideas are not readily usable.
Thanks for sharing the idea and the results. Teaching MDSE as a graduate course, I have encountered the same situations and questions. However, I think the results are not as disappointing as the title of this post says! BTW, there exist some surveys on the acceptance of MDE in industry, e.g., the paper “Empirical Assessment of MDE in Industry” by Hutchinson et al., or “Applying model-driven engineering in small software enterprises” by Cuadrado et al.
Jordi,
thank you very much for sharing your poll and raising that interesting question!
Reading the results and the comments, my impression is that it boils down to the question “Which abstractions make sense under which circumstances?” (and only secondarily to the matter of benefits vs. costs, i.e. tool support, efficiency gains, etc.)
With one exception here (the comment from “Det” above), presently used abstractions (in nearly all MDx projects I have seen or heard of) seem to fall into two categories:
1.) “Draw your 3GL Code”, or, to be a bit more fair “Draw 3GL-level Classes”, which boils down to “1 Model Class –mappedTo–> 2-3 Code Classes (POJO, UI…)”
2.) “Draw the customers’ wishes” (and then put all the team’s wisdom into a transformation that turns the wishes into a running app – the DSL approach)
While (1) was a good starting point around 1990, it has been shown since then that it falls a bit short (you named the problems).
(2) is definitely an improvement, but it remains restricted to the project scope at hand, and specifically, it does not address the abstraction problems and the work in between; actually, to me, DSLs are more like a systematic approach to capturing requirements.
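Approach (1) above – one model class expanded into two or three code classes – can be sketched in a few lines. This is a hypothetical illustration in Python (real MDx tools use template engines and richer metamodels); the model dictionary, template strings, and class names are all invented for the example.

```python
# Hypothetical sketch of the "1 model class -> 2-3 code classes" mapping:
# a single modeled class is expanded into an entity class and a UI form class.

MODEL = {"name": "Customer", "attributes": ["name", "email"]}

ENTITY_TEMPLATE = """class {name}:
    def __init__(self, {args}):
{assigns}
"""

FORM_TEMPLATE = """class {name}Form:
    # One input field per modeled attribute.
    fields = {fields!r}
"""

def generate(model):
    """Expand one model class into entity + form source code."""
    args = ", ".join(model["attributes"])
    assigns = "\n".join(f"        self.{a} = {a}" for a in model["attributes"])
    entity = ENTITY_TEMPLATE.format(name=model["name"], args=args, assigns=assigns)
    form = FORM_TEMPLATE.format(name=model["name"], fields=model["attributes"])
    return entity, form

entity_src, form_src = generate(MODEL)
print(entity_src)
print(form_src)
```

The point of the sketch is how mechanical the mapping is: the “model” carries no information beyond what a 3GL class declaration already carries, which is exactly the criticism of category (1).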
It also seems to be commonly agreed that user interfaces are especially hard to model.
So, I’d like to ask: why don’t we address the big design mess in between? Are there really no better ways to model that stuff than what we do today? Can we really do no better than “modelling” user interfaces in JavaScript? Or logging with AOP? Or application assembly with beans.xml?
Many applications now are built by wiring together services like OAuth, MongoDB storage, social APIs and cloud APIs. Is code generation still really a thing when so much activity has moved to integration, plus adding a fashionable web skin for monetization?
Cliveb: That’s an interesting question, whose answer is maybe counterintuitive. We can compare the current situation with the original one, where you wrote the app and its whole stack from scratch – authorization, storage, integration etc. You would almost certainly write the stack elements to be reusable to at least some degree, yet there would be very little reuse within a given stack element. There was thus little scope for code generation of the stack elements themselves. A reasonable portion of the code of the app itself, on top of the stack elements, could be generated. This would be hampered by the early stage of evolution of the stack elements – inconsistent interfaces etc. Of all the code you needed, only a small portion could thus be generated in theory. In practice, as you were building the stack as you went, you wouldn’t know the interfaces ahead of time, so you would be hard pressed to build even a modeling language for the application itself.
Nowadays, there are various stack elements and layers, whose code you can just reuse as-is. They are well understood and documented (at least better than in the previous case), so you have a good chance to create a modeling language and generator that makes use of them. They might be explicitly visible in the models, showing how and where you interface. Alternatively they could be hidden from the modeler, with links to them being made automatically by the generator. E.g. if you model a data element, you could also model details of how it should be stored in SQL; alternatively the generator could figure out something sensible for itself.
Overall, the current situation is much more amenable to code generation. For a given app, the number of lines of generated code will be slightly higher, because of the better documentation and domain understanding at the start. Given that you no longer have to write the stack, the ratio of generated lines to the total number of lines you write will be much greater than before.
@Steven Kelly, your counterintuitive answer is well reasoned. I was also remiss in thinking through MDE for WOCA. Write once, compile anywhere is a huge productivity gain. This points at a low-level language like Go used for the baseline, where MDE helps the most: cross-compile Go for browsers and native devices (gopherjs / seven5).