I think it’s time to revisit Niklaus Wirth’s famous equation:
Algorithms + Data Structures = Programs
from an MDE (didactic) perspective. I propose the following MDE equation:
Models + Transformations = Software
Of course, it’s a simplification of reality (as Wirth’s equation was; in both cases we omit, for instance, the grammars/metamodels involved), but it gives a very simple way to explain what MDE is to novices (as we intend to do in our future book).
In fact, we can even go one step further (once novices have understood the basics!). Since transformations can be represented themselves as models, we could just write:
Models + (Transformation) Models = Software
which could be “conceptually” simplified as:
Models = Software
which clearly shows that MDE practitioners hold the absolute truth about software development 🙂
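The slogan can be made concrete with a toy example. The following is a minimal sketch (the model encoding and all names are invented for illustration): a “model” is just a platform-independent description of an entity, and a “transformation” turns it into software, here a generated Python class:

```python
# A toy "model": a platform-independent description of an entity.
model = {
    "entity": "Book",
    "attributes": [("title", "str"), ("pages", "int")],
}

def transform(m):
    """A toy model-to-text transformation: model in, code out."""
    lines = [f"class {m['entity']}:"]
    params = ", ".join(f"{name}: {typ}" for name, typ in m["attributes"])
    lines.append(f"    def __init__(self, {params}):")
    for name, _ in m["attributes"]:
        lines.append(f"        self.{name} = {name}")
    return "\n".join(lines)

software = transform(model)
print(software)
```

Executing the generated text yields a working `Book` class, which is the whole point of the slogan: the model plus the transformation is the software.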
I am not sure the equation is fully correct, and in its current form it might be used against MDE 😉
Models + Models = Software
2 x Models = Software
Models = 1/2 Software
(shameless disregard of the fact that Models is plural – but hey I call that artistic freedom ;-))
That’s exactly the discussion we were having on Twitter with @christynic. That’s why I talk about a “conceptual” simplification. Otherwise, we are just proving that you cannot generate 100% of the software with MDE!!
As she says: @softmodeling Models = software/C. Then all we have to define is C. Einstein did it 🙂
Of course if Einstein did it we can do it as well 🙂
Another way to look at it is that software is a model of some part of reality (e.g., a business process) expressed in a technical language. In that case the equation should read:
Models + Transformations + Languages + Interpreters = Software + Interpreters
Which can be reduced again to your original equation
PS: Interpreters (or compilers) cannot be disregarded IMO since they capture the semantics of the Models (M, MM, T, …). Of course one could always argue that these are also models…
Looking forward to seeing the “final” equation!
Hi!
If interpreters/compilers were to “capture the semantics of the models”, we could easily end up having the same (conceptual) model interpreted/compiled differently by different interpreters/compilers.
Do we want such a situation? I think we don’t.
Interpreters/compilers should abide by the semantics of the models they interpret/compile (actually, the semantics of the metamodels of which those models are instances).
In this way, the level of abstraction (read platform-independence) provided by models will allow you to define a system regardless of the lower level (read platform-dependent) implementation details. Such details would be taken care of by each compiler depending on the targeted execution platform.
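That separation can be sketched in a few lines (a toy example; the model encoding and both “compilers” are invented for illustration). The same platform-independent model is handled by two compilers, each of which must abide by the model’s semantics while fixing the platform-dependent details for its own target:

```python
# The same platform-independent model...
model = {"entity": "Invoice", "attributes": ["number", "amount"]}

# ...handled by two different "compilers". Each must preserve the
# model's semantics (same entity, same attributes) while choosing
# the platform-dependent details for its target.
def to_sql(m):
    """Target platform: a relational database."""
    cols = ", ".join(f"{a} TEXT" for a in m["attributes"])
    return f"CREATE TABLE {m['entity']} ({cols});"

def to_java(m):
    """Target platform: Java source text."""
    fields = " ".join(f"String {a};" for a in m["attributes"])
    return f"class {m['entity']} {{ {fields} }}"

print(to_sql(model))
print(to_java(model))
```

The outputs differ in every implementation detail, yet both are correct realizations of the one model, which is exactly the platform-independence argument above.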
MDE_SW = 0.8 * transformation(model) + 0.2 * handcraft(non-modeledThings)
which is far better than non-MDE_SW = 1 * handcraft(self) ;D
By the way, I think that Niklaus Wirth’s equation “Algorithms + Data Structures = Programs” is correct… But only for imaginary programs floating in the air!
I propose to extend the equation: “Algorithms + Data structures + Architecture = Software”, where:
– Software is a Program in a usable shape,
– Architecture stands both for the Program skeleton (a link between all these Algorithms and Data Structures) and the Software skeleton (something installable and runnable, for example a file with a .jar extension!)
Hi!
@jordi,
I think it’s an interesting topic. I have a question: why did you use ‘software’ rather than ‘programs’?
@Vincent,
I think your equation is a realistic, empirical version, but it might not get at the essential question: what are models, deeply?
(My recent post “Models: Execution or More” has some discussion related to this.)
I think Wirth’s equation is a very exact answer to what programs are; it captures the intrinsic nature of programs.
And, IMHO
Architecture = structural factors (constructs, frameworks) + methodological factors (principles/methods for design and implementation)
TY,
I used software instead of program because to me (again, we are missing precise definitions for these terms as well) software is more generic. Programs seem to assume the presence of code to execute/interpret, while when using MDE we could directly interpret the models. At the same time, software is more than a program (it could be a set of programs or components, may need to include databases, use some complex software architecture, …). I believe MDE can be used to generate software (in this broad sense) and not only programs.
I may be just plain wrong regarding my interpretation of the words program and software, but I hope this conveys my intention when choosing these words.
A few general points about this.
1) MDE is not just about producing programs or software. It is also about producing systems (which include programs/software).
2) I like Vincent’s definitions, but I’m not entirely convinced that they get to the essence of the difference between programs and software. A program, ultimately, is a sequence of instructions that can be automatically executed on a machine. Software must include human and organisational context, in my opinion (as well as architecture). I suspect this is why building software is *hard* – the relationship between the thing you’re building (your artefacts) and its context (users, stakeholders, organisation, communities) is constantly changing.
3) Is transformation all that is needed to produce programs/software from models? Part of the answer is the hand-crafting (as pointed out by Vincent), but I suspect there’s more to it than that — there’s validation, etc.
Cheers,
Richard…
jordi, thank you for explaining! I think I now understand your thinking a little better: placing MDE on a broader basis, the ‘software’ in your broad sense or, like Richard’s points, ‘systems’. In this broad sense/context I would sometimes tend to use the term ‘applications’.
Furthermore, following Richard’s and Vincent’s comments, I think ‘hand-crafting’ may be very realistic, but perhaps it’s a little similar to how programming rose from assembly, or data processing rose to databases; maybe we’ll have to cut off, bit by bit, the hand-crafting tail at the modeling level…
I like to take an economical, hands-on approach to modeling and system development. For modeling to be really interesting it has to increase software productivity directly, rather than just reducing process overhead.
Drawing code in UML, or developing both a model and a transformation, is at best neutral. I find UML lacks expressiveness, and UML interpretation is a major issue. Customers with a time-to-market and product focus do not want it. It is like creating an automated assembly line to produce one car.
I like your equation, but you need the architectural component in the equation.
The conceptual model lifts the veil of implementation detail. Much of the technical detail is an expansion of architectural elements. Consider reference-counted smart pointers in, say, C++. If they were to be expanded in a UML class model they would render it unreadable.
As Ty points out, this example, or a GC solution, is implicit and hidden as composites or aggregation. However, just pushing implementation detail off into a single opaque transformation makes it impossible to use. As a designer, you need to know how the design is translated and trust it. Otherwise you end up sitting like a cartoon illustrator, flipping the scene images over and over.
A substantial amount of architectural elements is proprietary and domain specific. They can still be prolific enough in a product (or product portfolio) to support the cost of developing a transformation.
To be economically viable, transformations must be reusable. I find that architectural elements on different abstraction levels fulfil this. Vincent writes that 80% is transformable; that is my observation too. New architectural solutions especially need to be hand coded, proven, and matured before they become prolific in a system.
In fact, the ones that gain popularity are the ones that should be automated, because they leverage development and maintenance.
The manual 20% at a given time are first-offs and my customers’ expertise and know-how. They are rarely interested in having their algorithms fragmented in a UML model or expressed in an unfamiliar language. In day-to-day work I find that model-to-model transformations can peel away repetitive and intellectually monotone work.
A model is the product of applying an architectural model to a conceptual model and then, deus ex machina, adding a delta of hand-crafted code. It is close to the weaving idea.
I propose: T(T(… T(conceptual model) …)) + new architecture + know-how code = product code
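The chained-transformation part of that equation can be sketched as a left-to-right function composition (a toy example; the transformation names and model encoding are invented for illustration), with the hand-crafted delta added at the end:

```python
from functools import reduce

# Each Ti refines the model one architectural level down.
def t_add_persistence(m):
    return {**m, "persistence": "repository"}

def t_add_logging(m):
    return {**m, "logging": "structured"}

def t_to_code(m):
    """Final step: from the refined model to generated text."""
    return f"# generated for {m['entity']} with {m['persistence']}, {m['logging']} logging"

def chain(model, *transformations):
    """T(T(... T(model) ...)) as a left-to-right composition."""
    return reduce(lambda acc, t: t(acc), transformations, model)

generated = chain({"entity": "Order"}, t_add_persistence, t_add_logging, t_to_code)
handcrafted = "# know-how code written by hand"
product_code = generated + "\n" + handcrafted
print(product_code)
```

Each transformation stays small and reusable across products; only the final hand-crafted delta is product specific, matching the 80/20 observation above.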
Excuse my poor spelling; I wrote this on my phone…
I’m biased (having done a bit of work in the space), but I couldn’t agree more, that transformations are going to be of marginal value in a lot of domains (not all) until we can make them usefully and safely reusable.
Reusable transformations have attracted a little bit of attention in the research community, but not nearly as much as they warrant.
I definitely vote for “models = software”, but then I’ve been doing Executable UML, née Shlaer-Mellor, for years.
In Executable UML modeling, model compilers perform the transformation, and they are 100% reusable. (Actually, they are off-the-shelf software.) They don’t do model-to-model transformation; instead they translate the platform-independent domain models into a platform-specific output.
I find domain models can give the customer a clear view of how the system is partitioned by subject matter, and a class model, done using subject matter semantics, is very easy to explain.
In the end, Wirth’s equation still stands, because it doesn’t really matter whether the data and algorithms are expressed in pictures or text; they are still representative of data and algorithms.
so you still use transformations, even if in this case it is only a model-to-text transformation 🙂 but I agree that the main point is that models become software (whether using more or fewer intermediate steps is a different question)
If what you want is to define the *behavior* of a model, I think it is much better to write an interpreter that creates the behavior directly, rather than writing a translator that generates code which is then executed to create the behavior. Transformations that create code are best understood as an optimization of an interpreter. It is also possible to convert an interpreter into a translator by applying partial evaluation (this is a well-known fact). So, I would write it as
Software = models + interpreters
The translators are just optimizations, and are often more complex to write and debug than a simple interpreter.
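The interpreter option can be made concrete with a toy behavioural model (a minimal sketch; the state-machine encoding and names are invented for illustration). Rather than generating code, the interpreter gives the model its behaviour directly:

```python
# A tiny behavioural model: a state machine as plain data.
machine = {
    "start": "closed",
    "transitions": {
        ("closed", "open"): "opened",
        ("opened", "close"): "closed",
    },
}

def interpret(model, events):
    """An interpreter: walks the model, producing behaviour directly."""
    state = model["start"]
    for event in events:
        # Unknown events leave the state unchanged.
        state = model["transitions"].get((state, event), state)
    return state

print(interpret(machine, ["open", "close", "open"]))  # prints "opened"
```

A translator would instead emit an if/elif chain specialized to this particular machine; partially evaluating `interpret` with respect to the fixed `machine` yields exactly that, which is the well-known fact referred to above.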
inspiring discussion, how about the joy of one more equation:
no matter how one defines ‘model’, it always involves some sort of reduction, i.e. it doesn’t contain all the details. However, the software needs to carry all the details in order to ‘do the job’. From this perspective:
abstraction + detail = software
Now, how does this relate to the above equations? I don’t know, but just saying that the models carry the abstractions and the transformation supplies the details seems much too simplistic.
In Wirth’s definition, he excludes the programming language and the compiler (and runtime environment). In that sense, we can still say:
algorithms (represented in models of dynamic behaviour)
+ data structures (represented in models of static structure)
= programs (represented in models)
Wirth makes the simplifying assumption that there is a known fixed language, compiler, and runtime environment. For a particular flavour of Executable UML, that may be as true now as it was for Pascal, but for most other models that can produce full programs, enough of the language and transformations change that we can’t really ignore them.
Also, for most languages Wirth was thinking of, the mapping from the 3GL to Assembler and on to machine code was simple enough that it was a reasonable simplification to use an equals sign, and elide the difference between foo.pas (program source) and foo.exe (executable program). For most model to program generation, that’s no longer so valid, so maybe we should say:
models in languages describing dynamic behaviour
+ models in languages describing static structure
-> executable programs
Where the “+” and “->” are operationalized by transformations and/or interpreters, typically written by the language creators, and running in a particular MDD toolset. We can separate them into their own equation:
1) language workbench + problem domain knowledge
-> domain-specific language + modeling tool for that DSL
2) generation language + generation toolset + solution domain knowledge + domain-specific language
-> domain-specific generator
Personally I think there are more aspects of programs than algorithms and data structures, and a good DSL may blur the difference, allowing us to turn what would elsewhere be tricky algorithm specification into relatively easier data specification. So 1) and 2) are probably more important in Domain-Specific Modeling than Wirth’s equation or its updated versions. Still, the real value only comes when the languages, and one level still higher the language workbenches, are actually used to make models that make programs.
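The two-step recipe above (1: build a DSL, 2: build a generator for it) can be sketched end to end in miniature. This is a toy example: the DSL syntax, the `field` keyword, and the TypeScript-flavoured output are all invented for illustration:

```python
# Step 1: a tiny textual DSL for the problem domain:
#   field <name> : <type>
dsl_source = """
field name : string
field age  : number
"""

# Step 2: a domain-specific generator mapping DSL concepts onto the
# solution domain, here a TypeScript-like interface as plain text.
TYPE_MAP = {"string": "string", "number": "number"}

def generate(source, interface_name):
    fields = []
    for line in source.strip().splitlines():
        _, name, _, dsl_type = line.split()  # "field name : type"
        fields.append(f"  {name}: {TYPE_MAP[dsl_type]};")
    return "\n".join([f"interface {interface_name} {{", *fields, "}"])

print(generate(dsl_source, "Person"))
```

Note how the modeler only writes declarative `field` lines; all the tricky parts live once, inside the generator, which is where the Domain-Specific Modeling payoff comes from.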
By the way, amazingly, it looks like many of my younger colleagues know neither who Niklaus Wirth is nor his famous equation!!! I must be older than I thought…