Why model?
In my last post, I suggested two problems with contemporary model-driven software development. This article looks at the first of those: why model?
Any technique must offer benefits to justify the overhead of adoption. Commercially that means: deliver better software faster. Benefits can arise from other avenues too, not least “it’s novel / interesting / all the hipsters are doing it”. But let’s focus on the commercial imperative for now. As Grady Booch observed,
“The entire history of software engineering is that of the rise in levels of abstraction.”

Modelling offers higher-level abstractions not found in mainstream programming languages. Even the much-maligned UML provides State Models and Relations as first-class constructs. Higher abstractions should enable greater efficiency and so deliver better software quicker.
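As a rough illustration of what a higher-level abstraction buys, here is a hypothetical sketch of a state model expressed declaratively in C: the behaviour of a simple turnstile lives in a transition table that a tool could, in principle, generate directly from a state diagram. The names and the example itself are illustrative assumptions, not taken from any particular modelling tool.

```c
#include <assert.h>

/* A hypothetical turnstile, modelled the way a UML state diagram would
   describe it: states, events, and a declarative transition table. */
typedef enum { LOCKED, UNLOCKED, N_STATES } State;
typedef enum { COIN, PUSH, N_EVENTS } Event;

/* The "model": behaviour captured as data rather than control flow.
   Rows are current states; columns are events (COIN, PUSH). */
static const State transition[N_STATES][N_EVENTS] = {
    /* LOCKED:   */ { UNLOCKED, LOCKED },
    /* UNLOCKED: */ { UNLOCKED, LOCKED },
};

/* Execute one step of the model. */
State step(State s, Event e) { return transition[s][e]; }
```

The point is not the ten lines of C but the shape: once behaviour is data, it can be checked, simulated and regenerated rather than hand-maintained.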
So why isn’t modelling commonplace?
Why don’t we model?
To answer that we need to look at how to model. There are broadly three approaches:
- Formal methods
- Domain-specific languages
- General purpose modelling languages
Formal methods
Formal methods such as Z and VDM have been around since the 1970s. More recently tool-supported approaches such as TLA+ and Event-B/Rodin have appeared. But if MDx is a backwater in the general software development world, it’s a veritable Amazon(1) in comparison to formal methods adoption.
Formal methods can definitely contribute to the “better software” imperative. Any impact on “faster” is a second-order effect, however: the models have to be translated into working software by hand. And the learning curve can be steep, requiring a solid foundation in the theory and notation of one or more mathematical disciplines (predicate logic, sets, graphs).
Domain Specific Languages & Models
There has been a resurgence in domain-specific approaches in the last few years, as evidenced by the growth in Language Workbenches. Domain-specific approaches can directly address both “better” and “faster”. But they are not without hurdles, both technical and organisational. On the technical front, the main challenge is language design. Textual approaches (e.g. Spoofax, Xtext, MPS, Rascal) require the designer to understand compiler construction: parsing, linking, semantic analysis, type systems and so on. Graphical approaches such as MetaEdit+ perhaps simplify that. But there’s still the question of designing a language.
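To give a feel for the kind of machinery a textual DSL designer takes on, here is a minimal, hypothetical sketch: a recursive-descent evaluator for a toy expression language (digits, `+` and `*` only). Real language workbenches handle far more – linking, type checking, editor support – but even this hints at the compiler-construction knowledge involved.

```c
#include <ctype.h>

/* Toy expression DSL:
   expr := term ('+' term)* ; term := factor ('*' factor)* ; factor := digits */
static const char *p; /* cursor into the input string */

static long factor(void) {
    long v = 0;
    while (isdigit((unsigned char)*p)) v = v * 10 + (*p++ - '0');
    return v;
}
static long term(void) {          /* '*' binds tighter than '+' */
    long v = factor();
    while (*p == '*') { p++; v *= factor(); }
    return v;
}
static long expr(void) {
    long v = term();
    while (*p == '+') { p++; v += term(); }
    return v;
}
/* Evaluate an expression such as "2+3*4". */
long eval(const char *src) { p = src; return expr(); }
```

Even this toy omits error reporting, whitespace, scoping and types – precisely the things that make real language design hard.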
The organisational barriers are at least as significant – and independent of the textual/graphical debate. Getting traction for a DSL depends heavily on the organisation’s approach to software. It’s possible in companies building software products, especially those offering related product families. The cost of investing in language design and tooling is justified through repeatability and hence efficiency. But not all software falls into the “product family” bucket. Even when it does, some organisations – and many developers – are nervous about building a proprietary language. Maintainability, recruitment and CV curation can be powerful adversarial forces.
General Purpose Modelling Languages
Mention “modelling language” in the context of software and the UML is never far away. If there were a poll for most debated standard, it – in partnership with its close cousin MDA – would win hands down. No contest.
Why?
Because UML Models aren’t executable but MDA needs them to be.
Actually I should be more specific. The vast majority of UML models are mere sketches. Sketches aren’t working software.
Sketches need lots of human endeavour to translate them into working software. Which isn’t to say they’re bad: a quick diagram on the whiteboard can be invaluable. But it’s a long way from working software. At the height of its hype curve, the UML wasn’t capable of describing precise, executable models(2). Without those, it’s impossible to automate software generation. Without automation, we don’t get better software quicker.
This is the fundamental mistake with MDA:
- An incomplete language intended for sketches is not a viable basis for precise, executable models.
- Without precise models,
- no formal checking can take place. So the impact on “better” is marginal;
- no process automation can take place. So the impact on “faster” is at best nil.
Summing up: MDA didn’t deliver better software quicker. It had the hype and the backing of large organisations. It didn’t stick because, brutally, it didn’t work.
So – in the context of general purpose modelling – let’s be clear about this: as long as a manually intensive process sits between a model and working software, the model is no more valuable than a sketch.
Some will argue the UML now has the constructs required for executable models and can, therefore, support an automated process. But whilst it may have finally won the technical battle, it has emphatically lost the mind share war. It’s not even “not cool”; it’s increasingly just not known.
UML & MDA were the poster children of the model-driven world. They had hype and mind share that no other model driven initiative has come remotely close to. Sadly they turned out to be the Emperor’s new clothes.
That is not to say that MDx using general purpose languages is fundamentally flawed. Far from it. There are several examples of credible tools based on general purpose modelling: Bridgepoint, Cloudfier and Mendix to name a few. General purpose modelling can address both “better” and “faster” primarily because it enables separation of problem domain and technology concerns.
What’s missing?
In short: an easy on-ramp.
Formal methods are powerful but have a steep learning curve. They can most definitely facilitate “better” software and to some extent “faster”. DSL approaches can enable “better” and “faster” but have both organisational and technical hurdles.
Either can be extremely valuable – but there are challenges in getting there. Users might see the motorway, but there’s a roadblock to navigate first.
General purpose modelling should be the on-ramp. Whilst it may not offer all the benefits of formal modelling or DSLs, it can come without the barriers.
Unfortunately it’s stigmatised by the UML/MDA debacle. But it can, and should, offer the stepping stone. Indeed for many it may be sufficient. We’re missing the things that make it easy. Instant gratification. The things that attract people because they make the process simpler / quicker / rewarding / fun.
Specifically,
- whilst there is a plethora of tools for building models, few of them support executable models. Of those few, fewer still are actually rewarding to use.
- we’re missing the pre-existing models that serve as exemplars. Models that are demonstrably translated into real, working software. Models that can be adapted or reused to meet different requirements.
- we’re missing the translators that turn those models into working software. Automatically, quickly and repeatably. We have the tools to write those translators: we don’t have the translators themselves. At least not robust, industrial-quality translators that produce robust, industrial-quality software. Software that can be used by real users or sold to real customers. Results that look as good as, and function as well as, ‘hand written’ alternatives. Crucially, those translators need to be open for adaptation.
- we’re missing the cohesive environments that make it easy. Environments that don’t need weird hacks or obtuse incantations to make them work. Tools that “just work”. Tools that combine the constituent parts for modelling and translation into a consistent, seamless, industrial-quality experience.
- we’re missing the ecosystems that pull these things together to forge communities. Communities that generate interest because they’re doing cool stuff.
What to do?
If modelling is to gain significant presence we need to address the issues above. Despite the reservations of the meta-wizards, the muggles can model. Sure, some will be better than others. But there are plenty of average programmers making a decent job of churning out software today. Plenty of those would be happy to work in environments that make the job easier, more rewarding and more productive.
It can be done. Mendix is a promising example which addresses all the points above. Interestingly, Mendix mutes the “Model Driven” message, with much greater emphasis placed on the result – better software quicker. That’s exactly as it should be. But one tool an ecosystem does not make. We need more.
People will embrace modelling if there’s a compelling reason to do so. That “compelling reason” is the ability to produce better software quicker. If modelling is to gain popularity, the community must embrace that mantra.
(1) Amazon the river, of course. It is, however, topical, given that Amazon the company has published papers on their use of TLA+.
(2) Indeed, Grady Booch has consistently stated that supporting executable modelling was explicitly not a goal for UML. Which appears somewhat at odds with his observation on the history of software.
Featured image credit goes to Wade M
Software Architect and Developer with a deep interest in building better software faster. Long time advocate of model driven techniques.
Many small and new companies will do no modeling at all. The argument that can be used to get people to create even what you here call sketch models is the following:
By creating a model, you as a team need to communicate and create a common understanding of the problem and the solution you are trying to build. In this way, even the simplest of models can help avoid miscommunication and bugs later on.
The vision of executable models is so nice, and I believe we will get there some day, also for general-purpose models. But this vision is very far removed from the reality in many small and medium-sized companies that produce software as an add-on for another embedded or smart product.
Hi Ulrik, thanks for the comment. I agree many do no modelling at all – at least not “modelling” as we think of it. Rather than starting with sketches, though, I propose it should be the other way about. Working software is what matters. The focus needs to be on enabling better software quicker. If that’s based on modelling, then modelling will gain traction. I think history has shown that the route from sketch to executable model hasn’t worked.
I’d rephrase “why isn’t modelling commonplace?” as “why are there many more non-modellers than modellers?”
And I’d answer: same reason there are more brick-layers and concrete-pourers than (construction) architects and structural engineers: it takes more investment in learning, then thinking, and responsibility is bigger.
Plenty of improvisation-artisans around, hacking out APIs and lines of code for the body shoppers.
Hi Antonio, thanks for the comment. I’m not sure I agree with the construction analogy though. There’s an implicit assumption that a model-driven approach needs more intellectual capacity than conventional programming. I don’t buy that. For example: contemporary “web style” applications involve significant accidental complexity. In many cases it’s at least as significant as the essential complexity arising from the problem domain. MDx has the potential to better separate those concerns. So it has the potential to make the job simpler. Simpler should reduce the intellectual barrier, not increase it.
I do agree there’s currently a psychological issue in that it’s cool to “hack out an API”. However I think the roots of that stem from producing working software, not from coding C/C++/Java/whatever per se. An approach that makes that easier/quicker/more fun/more cool stands a good chance of gaining traction.
I mentioned “investment in learning, then thinking, and responsibility”, none of which implies “more intellectual capacity”, but rather commitment and time (and possibly budget/money to buy that time). So I agree with you in not agreeing with the “more intellectual capacity” assumption.
I keep finding developing MDx infrastructures the most fun and coolest discipline around (but that’s me), and using them to produce working code the easiest and quickest – possibly because, as author of the MDx stack, I know them inside-out.
In short: I could not care less anymore about why others do not do modelling, after 25 years doing it. I just do models, use them to communicate and produce working software, and usually leave this recurring story of “why”s to Academia – yet this time I got caught in the wake (again).
If you want to know the difference between formal methods and mathematical modelling, come to our workshop today in room Lanrentien 😉 http://mmmde.github.io
I was really tempted NOT to approve your comment (since your workshop conflicts with my CloudMDE one and I don’t want you to steal my audience), but in the end I’m such a nice guy…
Where’s the like button? I’m such a nice guy too [citation needed]
As far as “Results that look as good as, and function as well as, ‘hand written’ alternatives” goes, I’m going to have to use my standard C compiler example. Having the C compiler generate assembly usually doesn’t result in assembly that looks as good, but it often functions better than the ‘hand written’ alternative.
I agree that the construction analogy isn’t good. Abstraction requires a change in mindset, but that change is achievable by anyone. (10x rule probably still applies)
Hi Lee, thanks for the comment. By “looks as good as” I was thinking about UI/UX, not the look of the code itself. But on that front I agree with you: when the code becomes a derivative artifact – like assembly today – then the rules about what constitutes “good” change. Loop unrolling, for example, is standard practice in optimising compilers. It would however be rejected by pretty much every coding standard.
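To make the loop-unrolling point concrete, here is a hypothetical side-by-side sketch: both functions compute the same sum, and the second is written the way an optimising compiler might transform the first. The unrolled form is exactly the style most coding standards would reject in hand-written source, yet it is perfectly acceptable as a derivative artifact.

```c
#include <assert.h>

/* The straightforward loop a human would write. */
int sum_plain(const int *a, int n) {
    int s = 0;
    for (int i = 0; i < n; i++) s += a[i];
    return s;
}

/* Hand-unrolled by a factor of 4, as an optimiser might emit it.
   The trailing loop handles any remainder, so it works for all n. */
int sum_unrolled(const int *a, int n) {
    int s = 0, i = 0;
    for (; i + 4 <= n; i += 4)
        s += a[i] + a[i + 1] + a[i + 2] + a[i + 3];
    for (; i < n; i++) s += a[i];
    return s;
}
```

Both return the same result; only the (generated) shape of the code differs.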
I still stand by the Construction Architect and Structural Engineer analogy:
Who says that masons (the brick-layer and concrete-pourer kind) are not as intelligent as those with university degrees? After all, they build solutions, and those solutions almost always stand for decades, a century or more. What they do not supply is a predictable outcome and quality standard, as they do not really fashion detailed blueprints, nor do they calculate precise load-bearing capacities (too bad when an earthquake strikes close by).
Even though that was not what Scott was referring to, I’d argue the analogy does not hold here, Lee. The audience of assembly code is a computer. If the code does what it is supposed to do, nobody cares what it looks like. The audience for 3GL source code is a human being. Unless the code generator is also compiling the generated source code into object code, and deleting the generated source code as a single step, the generated source code is going to be read by people, either because they need (there are good reasons for that) or just because they want (a trust issue if you will). Generated code that sucks compared to handwritten code is another reason code generators get a bad rap. There are no technical reasons for not generating readable code. The economical reasons for generating code that is hard to read seem weaker to me than the ones (economical or psychological) for generating readable code.
If we’re going to raise the abstraction level it needs to be all-in. Leave very few gaps for reasons to look at the lower-level code. There is no reason to shoot for anything less.
“Leave very few gaps for reasons to look at the lower-level code.”
Yes. But as long as there is still a possibility/need (and I suspect there always will be), I see no reason to produce unreadable code.
We’re going to have to disagree then. I see no reason to put effort into making the code readable. As the model compilers improve, the need will go away and we’ll be arguing about the next level of abstraction.
Can’t reply to the proper comment for some reason.
Lee, one reason that is going to take a long time to go away (and only well after MDD becomes more widely accepted): as an exit strategy. If the code generator generates readable code, people will feel safer that they have a backup plan in case this MDD thing does not work for them. Otherwise, they would have to start from scratch.
Refusal to accept this (and other psychological reasons) will keep MDD a hard sell (and, I suspect, prevent it from ever becoming mainstream).
Strictly anecdotal, but I’ve never seen readability be a barrier to entry. Most of the problem is change and not seeing a need for change. Same as assembly to C. People are willing to wade through mud until it gets chin deep, before they look for alternatives.
Lee, in the past, I have myself often brought up the idea that the same thing happened in the 50s/60s when we moved to 3GLs. But today, I think this misses one important point: that the modern status quo is much more comfortable than it was back then. Taking the next step forward is going to need to offer more than a raise in the level of abstraction, as the level of abstraction provided by 3GLs is not terrible (comparing to assembly). We need to show it is faster, cheaper, easier, safer, better. Anything we can do to remove obstacles to adoption is a good thing IMO.
My experience going assembly to C was in the late 90s in embedded programming. The level of abstraction in 3GL is terrible. Any large project will show that. Maybe we’ve become so used to it that we can’t see the domain pollution.
I’m starting to wonder if the domain pollution has become such a given in programming that we no longer see separation of concerns as viable. When I started programming, the abstraction steps were already exposed, so 3GL to xtUML seemed obviously alike to assembly to 3GL. When I talk to younger programmers, they may no longer be capable of seeing it, because the domain pollution is considered a given.
Lee, MDD is not the only way to achieve proper separation of concerns. Most are comfortable with framework/runtime-based solutions that allow them to use conventional 3GLs.
I am all for SoC (that is what got me into MDD). But I don’t think it alone serves as motivation for people to leave the status quo (be it technical folks, or their bosses).
Nice article. It has certainly been a difficult problem. I agree with you that people want/need to see the benefits to get them engaged enough to keep learning. The cost to get to the point where one understands modeling enough to see the benefit is still high. I recall how easy the old C hello world was; the object-oriented hello world was more complex. The overhead of modeling is more complex than that. Once people see the benefit of developing at a higher level of abstraction, one would think they would invest and put modeling in their engineering “tool chest” (that has been the theory). I agree that the technology that is used to implement modeling has not yet risen to the challenge.
Here is a short article that breaks a modeling tool into “layers”, application, semantic, and technology: http://onefact.net/three-layers/
All three layers do exist. While all three need work, it is the technology layer where the cohesive environment that helps draw new engineers in lives. This layer can be worked on independently of the others. We need people to see the value in investing in doing so.
I disagree with some here who would say the masses of programmers are somehow not capable of modeling or are not interested in it. The masses have specific tasks in front of them, and they will accomplish those tasks using the most efficient way they know. When approached with a task, a person can often hack together a working “solution” quickly, but engineers know that quick hacks are not usually the most efficient in the long run. I think we need to do a better job of showing people why the investment in learning modeling is worthwhile. I am struggling with how best to do so.
For a company to invest in modeling, at least these factors should be considered:
- The business model of the company: whether it is product-oriented (it sticks to a single product, constructing and maintaining it for many years, upgrading and extending it) or “project-oriented” (projects come and go frequently, each one releasing another product that the company does not care for afterwards).
  - Product-oriented companies usually have a well-established business in which the software product is just one component. These are rather “big” companies, able to maintain software development teams. Their focus is more on “better” than on “faster”. Here modeling is a reasonable investment.
  - “Project-oriented” companies are “small” outsourcing and startup companies. They frequently change direction, technologies and business areas, and usually focus more on “faster” than on “better”. In my practice in such a company there were even customer requirements for “private technology”, project isolation and no reuse in other projects (for other companies). All of these are real obstacles to adopting modeling.
- The product development and life-cycle methodology adopted: whether the company does “big bang” development (usually following a waterfall-like, one-time development process) or builds its products iteratively and incrementally, and whether it supports and maintains its products afterwards.
  - “Big bang” with (almost) no maintenance is what project-oriented companies tend towards. This makes modeling look like a nice drawing at high cost.
  - Iterative development sometimes degenerates into a “trial and error” approach, which decreases the value of formal validation, relying on the “market”, the “business”, “customers” and other external factors to provide it. A frequent excuse for code fixes is that “the business changed”. This is not a good environment for modeling.
  - Incremental development adds features and components to the product(s) within the same architecture, framework and infrastructure. Here the focus is more on correctness, which facilitates introducing modeling and reusing models to generate code. Especially when combined with maintenance of the product, the investment in modeling and in integrating modeling tools with elements of MDA really pays off.
- Whether the product is for end users or for integration with other software.
  - Products that integrate with other software deal with formal interfaces, (fairly) clear specifications and APIs. This is mostly formal work following external specifications, and here TDD tends to replace the specification as a really valuable practical approach. That leaves less space for modeling.
  - Products for end users have to cope with a lot of uncertainty, unclear requirements and integration with complex external business processes. In such cases, having the means for clear communication and simulation – imagining the product or feature to build, and being able to change it quickly and easily during discussions and interviews – is exactly where modeling pays off. Model reuse, as above, is a real benefit. Having executable models here would be even better.
To sum up: there are many factors to consider when evaluating modeling, MDA and MDx practices before passing sentence on them.
There is a wide spectrum of cases between “small, project-oriented, big-bang development, software/hardware integration organizations (even individuals)” and “big, product-oriented, incremental, maintaining and supporting, end-user and business-process-integration companies”.
And one more factor to consider: statistically, the voices from the former simply outnumber the voices from the other end of the spectrum, but this does not make modeling and MDA less valuable in the appropriate context.
For a practical case study please follow http://mdatools.net/
After 20+ years of R&D and working with early adopters, the fact is the model now is the application, with the build environment covering all requirements. Many in the 80s had the vision, such as that expressed by industry thought leader Naomi Bloom: “Writing less code to achieve great business applications was my focus in that 1984 article, and it remains so today. Being able to do this is critical if we’re going to realize the full potential of information technology”; “….how those models can become applications without any code being written or even generated”; “If I’m right, you’ll want to be on the agile, models-driven, definitional development side of the moat thus created…..”; “If your Enterprise vendor isn’t pretty far down this path, their future isn’t very bright”. Naomi goes on to say “It really matters how your vendors build their software, not just what they build”. How true!
So what have we created? Well, interesting comments about 3GLs – many have described this new way as 6GL… as Bill Gates described in 2008 (http://www.infoworld.com/d/developer-world/gates-talks-declarative-modeling-language-effort-386) when he announced plans to build a declarative modelling capability reducing the need to code, calling it the “holy grail of development forever”, “the dream the quest…. but would be in a time frame of 5 to 8 years.” Hmm, so where is it? Anybody know? Microsoft tried to patent some key attributes, but too late – prior art of some 10 years means no one can patent… maybe big vendors find that a bit of a challenge!
So what did we do? Very simple really: back to basics, looking at how business really works. First, stating the obvious: all information is created by people (or machines built by people!). Research established that there were fewer than 13 work task types that address all business requirements, with the focus on supporting people – the most important element being the UI. Then we realised that, as generic capabilities, they could be stored in a database ready to be configured (one of those failed MS patents!). Powerful links allow great flexibility and built-in rules. Interestingly, we only built the graphical “model” designer after this core capability, itself built using declarative techniques, which build very quickly and support easy change. See this research paper http://www.igi-global.com/chapter/object-model-development-engineering/78620 and you should follow sites with informative comments such as http://bpm.com/bpm-today/in-the-forum/what-s-more-important-to-business-today-bpm-as-a-methodology-or-bpm-as-a-technology
I can hear the cries of disbelief: how can you handle all needs? Well, here is the list:
• Process engine to ensure all works to plan
• Rules engine reflecting real world of compliance
• Calculation engine automating system work
• State engine – real-time feedback from any point
• BPM focus on people and their processes
• Workflow everything connected in right order
• Audit trail, events, escalations = control with empowerment
• Rapid prototyping user involvement in build
• Time recording supports activity based costing
• Real time reporting become predictive
• Prebuilt configurable dashboard operational visibility
• Build mash ups one screen multiple data sources
• Linked intelligent Ajax grids enter data only once
• Roles and performers people and machines recognised
• Management hierarchy see who does what and when reallocate work
• Orchestrating business processes and legacy systems as required in a single environment – a 21st-century approach = agility in software
• Call Web Services wrapped up in a process
• User interface dynamically created linking people, roles, task type and data via forms for specific instances, web or client server
• Content handler and in memory work capability to ensure high performance.
• Pre-built templates for custom documents, letters, e-mails, messages etc dynamically populated with instance specific data and edit capability in browser
• Process and task versioning control
So where are we in “distribution”? As hinted in the title of the post, there is much resistance to the model concept, but it is now a reality; a real build-capable 6GL is an even bigger challenge to the status quo. The fact was we were decades ahead of the game, and so, very frustratingly, had to “wait” for the market to be ready. Forums such as this certainly help, but equally important are the funding and a path to global distribution. We are now very close. Be aware this is highly “disruptive” technology: it puts into individuals’ hands the same capabilities as the likes of IBM’s collection of acquired (and thus expensive) software, at a fraction of the cost!
As they say watch this space everyone in the “Model” game will be winners….as of course will be the customers with this long overdue step change in business software.
Great article, Scott. One clarification: at the end you seem to be saying that general purpose modeling “should be the on ramp”, with executability, and in a pre-packaged cohesive environment. You offer Mendix as an example of that. I’d claim that Mendix is not general purpose – it can’t generate a lift, a space shuttle, or a digital watch. Its language concepts are from the world of IT apps, either as web pages or mobile clients with database backends. I agree it handles a broad range of such apps, and certainly that there is a massive amount of such apps that need to be built, and that Mendix is a great “on ramp” to modeling. But I don’t think easy user-level modeling can be achieved with a general purpose modeling language. It’s precisely by being domain-specific that Mendix can make modeling easy and the results executable. Johan den Haan’s interview backs up the reliance on DSLs: see Q5 here http://patternsbasedengineering.net/johan-den-haan/
I would change your mention of “general purpose” to “pre-built domain-specific”. The main difference from language workbenches is then that the user doesn’t first need to analyze their domain and create languages and generators, but rather can get straight into modeling. Obviously economics means there won’t be a pre-built tool for every niche domain, but for the most commonly encountered domains there should be a Mendix.
One of the original theses of Microsoft’s Software Factories was that the language workbenches would be used primarily by software integrators to build pre-packaged tools like Mendix, and that there would be hundreds or thousands of such tools. That clearly hasn’t happened – instead there have been hundreds of companies using language workbenches to build languages for internal use (at least that’s what we’ve seen with MetaEdit+). A more recent direction for us has been companies building a modeling language for configuring and using their product, and shipping an OEM version of MetaEdit+ along with their product – so not strictly speaking internal use, but still tightly integrated with the company making the language.
For the “next Mendix”, it’s an interesting question whether to use a commercial language workbench to get a good modeling tool quickly, or to use an open source modeling framework like GMF, or still lower-level frameworks like GEF or yFiles. My understanding is that Mendix went with the lower-level approach – but I think they started building their tool before they knew there were language workbenches out there. Certainly at their current size they have the resources to work with lower-level components and use them to get what they want. A newer, smaller company might do better with an existing language workbench. Whichever way it happens, I look forward to seeing more executable DSLs out there for people to simply use!
Higher level languages tend to benefit one class of application over others – that goes along with raising the level of abstraction (determining what is an unimportant detail that can be abstracted away depends heavily on the category of software you are trying to enable – what is unimportant in one category of software can be critical in another). However, I don’t think that makes them more like DSLs than GPLs – they are less general, indeed, but they are still quite general. The point of horizontal DSLs is basically to make it easier to address one specific kind of architectural concern (SQL for relational database storage and querying, Swagger for specifying REST APIs), and as such they are about the subject matter of software development (as opposed to some non-software-related domain), and while they can make the lives of software developers easier (what – I don’t need to specify the query plan myself?), they do not try to address the needs of any particular business domain, and as a rule are not meant for non-developers. So I’d not put horizontal DSLs in the same bucket as vertical DSLs – they satisfy very different needs. In the case of Mendix, I’d pose it is a 4GL that employs a few horizontal DSLs.
Long story short: while everyone can see value in DSLs, there is clear value in raising the level of abstraction of GPLs as well – in other words, 4GLs. Even though both approaches promote a rise in the level of abstraction, they are quite different beasts and suit different audiences and use cases. There is no “best approach”, and we should not waste time discussing which is better without a specific, concrete scenario at hand.
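The query-plan aside above (“I don’t need to specify the query plan myself?”) is easy to make concrete. The sketch below uses an invented three-row table to contrast the declarative SQL form – where the engine decides how to scan, group and sort – with the imperative equivalent a GPL programmer would otherwise write by hand:

```python
import sqlite3

# In-memory database with invented sample data, purely for illustration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer TEXT, total REAL)")
conn.executemany("INSERT INTO orders (customer, total) VALUES (?, ?)",
                 [("acme", 120.0), ("acme", 80.0), ("globex", 50.0)])

# Declarative (horizontal DSL): we state WHAT we want; the engine picks
# the query plan - index choice, grouping strategy, sort algorithm.
declarative = conn.execute(
    "SELECT customer, SUM(total) FROM orders GROUP BY customer ORDER BY customer"
).fetchall()

# Imperative equivalent in a GPL: we spell out HOW to scan, group and sort.
totals = {}
for customer, total in conn.execute("SELECT customer, total FROM orders"):
    totals[customer] = totals.get(customer, 0.0) + total
imperative = sorted(totals.items())

# Same result, very different levels of abstraction.
assert declarative == imperative
print(declarative)
```

The two versions compute the same totals; the difference is solely in who carries the “how” – the engine or the programmer.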
I have some past experience in civil engineering and with large software projects for new mining operations involving complex technical applications (3D graphics, optimisation, simulation, hardware interfaces, machine sensors…, lots of developers, analysts…).
Please comment on my perception that the real need is to get users and solution providers to easily see how complex businesses currently do things (with a single-truth “wiring diagram” that they readily understand without needing to learn a new language). Secondly, the users and solution providers (including programmers) need to easily see how the wiring diagram of an improved business will work: showing the logic of what the people will do and what the new software will do, what rock or data or comms flows along the “wires”, and the logic of how and why it will work. This diagram ideally has a few levels of abstraction: for management overview, for technical customers, and for developers and engineers to have a clear idea of the architectural framework and sufficient detail to get on with development. The devil lies in the details of the software code statements, which invariably uncover problems and awaken new needs… that need to be addressed and reflected in the as-built wiring diagram.
Using the builder analogy above: I think it is too ambitious to hope for a language that ordinary humans can readily understand (English and some maths) and that can also generate code. This limitation means it cannot be used to generate code unless AI technology were on a par with a software developer (still decades away).
Complex systems engineering goes through stages of pre-feasibility, feasibility, definitive… with more detail added until it is eventually constructed. Building house extensions is very different from building skyscrapers. Similarly, modelling is not required when an Agile method will do for a simple app. In my opinion UML and BPMN are far too complicated for the people we build systems for. There is no point if we need to translate our models for customers to understand, because they cannot conceptualise and comprehend them properly. Effective communication between humans is the key challenge. Translating a visual “wiring diagram” that everyone understands and agrees on into any number of computer languages is the easy part.
Please let me know what you think, as I am working with a developer on a flexible app that creates wiring diagrams using simple English, maths and image inserts.
From what I understand, you think models/diagrams would be useful but you just don’t like to use UML/BPMN for that. This is perfectly fine – you could create a DSL to express exactly the kind of wiring diagrams you think would be most useful for you.
Hi Jordi, I am new to the site and the language terms. To me UML is a domain-specific language (DSL) for IT people (and maybe systems engineers trying to adopt it). BPMN is a DSL for business analysts. Customers quickly switch off when they see strange symbols/alphabets. I am arguing that the DSL for everyday people is natural language (English… and maths symbols), which is perfectly sufficient for pseudo-code if detailed programming logic needs to be described to knowledge workers and to programmers.
What do you think of a tool that manages universal everyday language with flows, logic, abstraction etc?
Mark,
I recognize your plea. You may be interested to know that there is a group of us who, for the last thirty years or so, have been trying to address the issues you raise. We distinguish the modeling of data flows from the modeling of data structure, because the definitions of data flows will change as the business changes. Models of data structure, however, if properly done, will describe the fundamental structure of a business so that most changes to the business can be accommodated by changes in DATA, not changes in STRUCTURE.
It is important to recognize, however, that this kind of data modeling is different from what you may have encountered before. It is not about database design, or any kind of technology. It specifically addresses the structure of business information. And, yes, successful practitioners have to be good at aesthetics and follow some specific disciplines for naming (and defining) terms and assertions.
Steven Kelly below mentions a “Domain Specific Language”, which is a term used for a subset of UML. As it happens, I wound up inventing one of those, before I realized it was an official thing (complete with its own acronym!). For years I used an approach invented by Richard Barker and Harry Ellis in the early 1980s. The notation is only simple boxes and lines, where each box represented a class of things of significance to the business, and each line represented a pair of assertions about a relationship between two of those things. Oracle had the original tool that supported that approach, but they got bored and eventually dropped it. Some other tools (notably PowerDesigner) do support it, though.
Unfortunately, UML and the object-oriented design community took over the conversation. UML, as normally practiced, is not good for this. It is strongly oriented towards object-oriented design. I was surrounded by promoters, though, who asserted that “UML can do anything…” OK, so let’s try. So, for my last book on data model patterns (“Enterprise Model Patterns: Describing the World”), I used my own version of UML – a DSL!
This differs from UML in one important way: in the “entity/relationship” world, a relationship is a structure linking two things together. By definition it is defined in terms of both parts, and each relationship name has to be a “predicate”. For UML as practiced, however, in anticipation of using it to create object-oriented code, each direction in a relationship is a path to be followed by a program. Among other things, if that relationship is a “property” of the subject class, the object class is not allowed to be part of that property. So the label winds up being a variation on the subject class name, so that the itinerant program can find it.
In my version, the label is a predicate. It does not reproduce in any way the name of the object. The name is constructed so that in each direction you have a sentence that can be clearly understood by anyone in that domain.
The second difference between my models and most UML models (and indeed, alas, most other data models), is that I strongly emphasize aesthetics. UML is not ideal in this regard, but you can accomplish the objectives: The purpose of the model is to present assertions about a business to non-technical business people.
When presenting the model to an audience of business people, I begin with a box that represents a thing of significance. An example might be “SALES ORDER”. First of all, how do you define that? No, it is not “an order describing a sale”. Try: “a kind of contract, in which our company agrees to provide one or more goods and/or services, in exchange for compensation”. Coming up with meaningful definitions is hard. But it’s important.
Then I add another, with a relationship: “Each SALES ORDER must be composed of one or more LINE ITEMS”. Is that a true statement? Must you have line items specified in order to have a sales order? In some companies, yes. In some companies, no. I then build up, a few entity types at a time, until a subject area of maybe 10-15 boxes (entity types) is on the drawing. The audience has now been “sucked in” to understanding a moderately complex model.
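The build-up described above can even be mechanised. The sketch below is purely illustrative (the function and its parameters are invented, not from any real modeling tool): it renders each direction of a relationship as the kind of full-sentence assertion a business audience can confirm or reject:

```python
# Hypothetical sketch of Barker-style relationship assertions: each direction
# of a relationship reads as a sentence a non-technical person can verify.
# Entity names, predicates and this API are all invented for illustration.

def sentence(subject: str, must: bool, predicate: str, many: bool, obj: str) -> str:
    """Render one direction of a relationship as a readable English assertion."""
    modality = "must be" if must else "may be"
    cardinality = "one or more" if many else "one and only one"
    return f"Each {subject} {modality} {predicate} {cardinality} {obj}"

# The two directions of the SALES ORDER / LINE ITEM relationship:
forward = sentence("SALES ORDER", True, "composed of", True, "LINE ITEMS")
reverse = sentence("LINE ITEM", True, "part of", False, "SALES ORDER")

print(forward)  # Each SALES ORDER must be composed of one or more LINE ITEMS
print(reverse)  # Each LINE ITEM must be part of one and only one SALES ORDER
```

The point is that the model’s labels carry enough meaning that reading a diagram aloud produces statements the business can agree or disagree with.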
(By the way, my book on “Enterprise Model Patterns…” is written that way, as well.)
Because my entity/relationship buddies were horrified that I should take up UML (“going over to the dark side”, was the expression, I believe), and my UML buddies were horrified that I should be corrupting their precious notation, I wound up writing a companion book, “UML and Data Modeling: A Reconciliation”. This addresses more coherently the points I tried to make here.
As Jordi said, what you’re suggesting sounds like a DSL. The beauty of a DSL is that for someone in the domain, they don’t really have to learn it to understand it – it uses the concepts they’re already familiar with, and their way of looking at a system (from a high level perspective, not a code perspective).
It’s not too ambitious to hope for this. We’ve seen over a hundred successful cases of DSLs like this, e.g. Polar’s sports heart rate monitors (http://www.metacase.com/cases/polar.html) and Nokia’s mobile phones (the ones that sold billions, not the smart phones – http://www.metacase.com/cases/nokia.html).
Very good point, and at the core of our original research… 20+ years ago. Users must understand just how their processes work, be directly involved in the build, and be encouraged to think of better ways with software now readily supporting them. This is all readily achieved via the graphical model. Core to our thinking was to remove coding from this exercise and work the way business actually works: in a horizontal flow of information across the organisation. Once users see their ideas coming to life without old IT barriers, the game changes; people become truly empowered.
So, if the goal is to build an ecosystem of supporting tools (and I mostly agree with this), how do we go about this? Many of the current advocates of MDx are academics; we don’t do well at building and maintaining tool ecosystems. The companies that would be needed to build all of that ecosystem will take exactly the kind of convincing you mention. Catch 22?
Of course, there are some companies / industries out there where modelling is used in some form or other and who produce tools. MetaEdit+ is a good example. But is this enough? At the same time, there seems to be a not insubstantial part of the developer community who reject even the most basic of abstraction-raising tools (e.g., IDEs); the web development community (perhaps outside the .NET/J2EE space) seems to be a prime example.
I wonder whether this is also an education issue: how can we show people that it is possible to be agile and still work at a higher level of abstraction? What are the fast and incremental ways of building up systems and the tools required to build them in parallel? Where’s the hacker appeal?
I first started using the ObjectTime modeling tool back in 1998 or so, which generated executable code in C++ from the old ROOM modeling notation, similar to UML. It worked well, for me at least, in the VxWorks environment we were using at the time. But it was rejected by my company because of several issues, most of which were directly related to training.
The developers using the tool were the same people who wrote the spaghetti code the modeled code was intended to replace. The developers tried to port the old code into the models. The modeled state machines were, therefore, also spaghetti and just as hard to work with as the original bad code.
Then there was the bad development process. The developers were doing a CABTAB (code a bit then test a bit) process, which failed miserably before the tool was adopted and failed again after it was put into use.
And on top of that nobody wanted to expend the effort needed to understand the output from the tool or how to integrate it into their other work. The generated C++ code was structured differently than what a human might intuitively write and sometimes debugging it required study. This caused frustration and delays.
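A toy example shows why generated code can be “structured differently than what a human might intuitively write”: generators typically emit table-driven dispatch rather than the nested conditionals a developer would write by hand. This sketch is illustrative only – Python rather than ROOM-generated C++, with made-up states and events:

```python
# Illustrative table-driven state machine of the kind code generators tend to
# emit. The telephony-flavoured states and events are invented; this is not
# actual ObjectTime/ROOM output.

TRANSITIONS = {
    ("idle",      "call_in"): "ringing",
    ("ringing",   "answer"):  "connected",
    ("ringing",   "hang_up"): "idle",
    ("connected", "hang_up"): "idle",
}

def step(state: str, event: str) -> str:
    """Dispatch through the transition table; unknown events leave state unchanged."""
    return TRANSITIONS.get((state, event), state)

# Drive the machine through a call: in, answered, hung up.
state = "idle"
for event in ["call_in", "answer", "hang_up"]:
    state = step(state, event)
print(state)  # idle
```

Debugging code like this means reading a data table rather than following control flow, which is exactly the kind of shift that requires the intellectual investment mentioned below.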
In short, use of modeling tools requires intellectual investment by developers and by project management, even when it works well.
ObjectTime was purchased by IBM/Rational and the product eventually faded away. But it worked very well for the embedded telephone switching software I developed, despite my colleagues’ lack of enthusiasm. The delivered system had a very low defect density and remained in use for eight to ten years after creation.
There are, for sure, social challenges when adopting an MDE approach, as your experience shows. A good developer is not necessarily a good modeler, and hence they may not be the best candidate to use the modeling tools that are supposed to help them do their work.
@Lee: Exhibit one. 🙂
@Rafael I’m not responsible for the use of poor tools or for following bad processes.
Dear all, thanks for all the insightful responses. I can’t do justice to them all in a comment and the thread is already pretty long. I had planned another instalment on “who models”, so I will try to incorporate some of the points into that. Meantime, thanks again for taking the time to respond.
Well, I am very puzzled, even disappointed… After I put down the 6GL claim – the “holy grail” of effectively removing coding and putting business back in charge of business software requirements – there are no challenges? Yet the discussion is so critical of the existing 3 & 4GL and other technical approaches/tools. The fact that someone actually comes up with a radical yet very simple solution is being ignored. Hmm… human nature’s instinctive reaction to the “new”, articulated by Mahatma Gandhi, is just so true: “First they ignore you, then they ridicule you, then they fight you, then you win.”
Every true innovation that challenges the status quo faces this, and even more so where big companies dominate… Maybe MDD/E is still at the ignore stage. I look forward to the ridicule – it usually livens things up! I once put this to a coder; he expressed initial disbelief, then said “well yes, maybe, but not in my lifetime”…
David, is there anything that people can look at to better understand what you are proposing and form an opinion? This thread is probably not the best place for that, maybe start a thread on the Model Driven Architecture group on LinkedIn, or ask Jordi about the possibility of writing a guest post. Best to include concrete information (screenshots/examples/running code) if you want feedback.
I see a lot of AI in such a solution. If the end result is coding a computer with explicit logic to do things, then there is no free lunch – the detailed logic has to be specified somewhere, which means effectively programming in a different, albeit higher-level, language. Model builders should stick to creating barely sufficient detail to define a framework for programmers to code in detail in their preferred language. Model builders should use the best language of communication that paying customers (and programmers) understand, and the obvious choice is natural language (for pseudo-code where required).
Trying to get models to generate code is like trying to get architectural models to generate the civil, electrical, process, water, sewage, lighting, heating and ventilation drawings… as a spin-off. There is no free lunch. The modeller must stick to their level of abstraction and leave developers to theirs.
What am I missing?
I guess what you are missing is the repeatable empirical evidence that this works. Without that, it’s all “Buy now, it’ll be wonderful!” hype, “It’ll never work” hand-waving, or “I tried it once and it failed/succeeded” overgeneralization.
Here’s an overview of several well-documented cases and the time-tested approach: http://www.slideshare.net/stevekmcc/planning-for-success-in-mdd-33547654. Follow the grey links on the slides for more info. And if you want the full explanation and even more real-world case dissections, there’s always the DSM book: http://dsmbook.com/.
Steven
Ah the ridicule…….making progress!
All early adopters, without exception, avoided “IT” as users took control! That has been one of our challenges! Some stats from some 12 years of one system, used to manage end-to-end grant management including means and performance testing for eligibility:
Constant change, with only one hour of shutdown in 12 years for a major data structure change
75 process maps (226 over life cycle)
2406 associated tasks (5087 over life cycle)
538 user interfaces (forms) (1114 over life cycle)
2682 athletes
143 organisations
76 sports categories
229 disciplines within the sports categories
1275 active awards with a total value of £255M
705 payments per month
5610 scheduled payments managed by the system
All Managed by 4 Administrators
A recent quote from a user working on improving how they work sums up the change in culture Adaptive Software can bring to a business: “Great to meet with you this morning and see the work you have done. It captures all my weird and wonderful ideas, and all done without telling me that I am expecting too much!”
I am working on an overview of how it works and will send it to Jordi.
Steven, I was not questioning the evidence that DSLs work. I am just saying it is not the best approach for *every* situation (I am an adopter and a fan of both approaches, and depending on the situation will choose one over the other). There are plenty of stories of successful adoption of executable modeling (the Shlaer-Mellor folks), even if not as well documented as you guys do it. The lack of such supporting material is a marketing failure, not a technical one. Doing a better job on that front is one of the things Scott calls for in this very post.
@Steven – And now I realize you were not replying to me (got lost in the comment tree) so feel free to ignore the comment above.
Mark, Rafael
This is the very issue we addressed over 20 years ago: how to remove the need for coding in business requirements. The fact is, business logic never changes, so code the supporting logic as generic, stored in a relational database, ready to allow easy customisation by the business analyst or process modeller! Use of a declarative approach from model to database means no code generation or compiling is needed, which makes both iterative build and change easy and quick. Yes, a good coder helps with complex calculations, algorithms etc., but the end-to-end business process is managed and controlled by the “model” builder. Of course, then the model is the application, making it easy for users, managers and auditors/compliance to see what is actually happening, aided by real-time feedback on activity.
Clever coding, of course, for gadgets and special requirements such as air traffic control or aircraft systems… but enterprise logic helping people in their daily work is now basically commoditised with this MDE declarative approach.
I will work with Jordi on an article to explain it in detail, so if anyone wants to build with such knowledge, now you know it works and will not be caught out by patent problems… It might take a few years, as all the issues I mentioned need to be addressed in the one build environment. However, once we are properly funded we will distribute globally, allowing collaboration on core code where helpful. The winners are “model”/process-thinking people and the end customer!
Hi David, thanks for your helpful comment and explanation.
1) Regarding, “Fact is business logic never changes so code the supporting logic as generic store in a relation database ready to allow easy customisation by the business analyst or process modeller!”
2) Please explain how the MDx approach differs from SAP, which seems to be able to link and configure generic code to suit a particular customer, presumably using a model?
Hi Mark
I really do not know the techie detail on SAP. As a Chartered Accountant I have never really bought into the “ERP” concept. Every business is different, and ERP just does not reflect how people actually work. The fact that they are now able to “link and configure generic code to suit a particular customer” does not surprise me, but it is not the answer, and no doubt very expensive! Sadly, the past decades of old “IT” hard-coded, inflexible, “inside out” systems have created a bit of a “mess”. What we addressed was how to move forward without creating another inflexible, time-limited “legacy”.
So “IT” tinkering around with the existing messy legacy is not the real answer. It was an Accenture executive who articulated very appropriately what we had done: “Wrap the new Adaptive green-field applications around the brown field of old legacy – do not try to change these brown fields of legacy!” So such use of legacy becomes the slave to the new “outside in”, people-driven applications. Plan the retirement of such legacy, including ERP. Once you have a complete record and audit trail of the creation of all information in the organisation, with real-time feedback reports, do you need double-entry bookkeeping? NO! So ends the need for ERP and the huge cost to support it… and that sums up the difference in business language. Of course, business is then back in control of their processes, with how it all actually works transparently viewed in the “model”. I doubt this comes with SAP?
Mark, thanks for the comment. However, I couldn’t disagree more with your assertions.
1. There is no need for AI to support a model-driven approach. Nor black magic, nor violation of the fundamental laws of information theory.
2. There is one, simple observation that underpins the productivity improvements attainable through model-driven development: separation of concerns.
3. The reality is that contemporary development is riven with manual repetition. The state of web application development in particular is parlous. Multiple layers in different, overlapping, non-integrated languages with single conceptual entities woven through like a hair ball.
4. At its most fundamental, MDx does nothing more clever than weave the hairball for you. No information is magically added: it’s just a case of taking the raw materials and following precise instructions. Just like there’s no magic in a conventional compiler doing branch elimination or loop unrolling.
5. All of this is possible, and -as Steven points out- is evidenced. But, critically, it needs the input models to be precise. Note I use ‘precise’ here very specifically to mean ‘complete and unambiguous within the subject matter’. I could not be more opposed to the idea that models should be left as informal, partially-specified constructs for the ‘developers’ to unpick, interpret and code. This is – per my article – the greatest folly of MDA. It was a notorious wrong turning in the evolution of software development. We must learn and move on.
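Point 4 (“weave the hairball for you”) can be sketched mechanically: one precise entity definition, with the layer-specific boilerplate derived from it and no information added. The “Customer” entity and the two templates below are invented purely for illustration:

```python
# Illustrative sketch: a single declarative entity definition, from which
# layer-specific artifacts (SQL DDL, an HTML form) are woven mechanically.
# The entity and the emitted fragments are invented for this example.

entity = {"name": "Customer",
          "fields": [("name", "text"), ("email", "text")]}

def to_ddl(e):
    """Weave the persistence layer: a CREATE TABLE statement."""
    cols = ", ".join(f"{f} {t.upper()}" for f, t in e["fields"])
    return f"CREATE TABLE {e['name'].lower()} ({cols});"

def to_form(e):
    """Weave the presentation layer: an HTML input form."""
    inputs = "".join(f'<input name="{f}" type="{t}">' for f, t in e["fields"])
    return f'<form id="{e["name"].lower()}">{inputs}</form>'

print(to_ddl(entity))   # CREATE TABLE customer (name TEXT, email TEXT);
print(to_form(entity))
```

No information is added by the generators: every name and type in the output is traceable to the one entity definition, exactly as with a compiler transforming source to object code.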
Hi Scott (8 Oct), thanks for your earlier reply.
1) I am starting to get my head around the MDx concepts of this site. I can appreciate the benefit of precise models in domains with repeated patterns/processes, and DSLs that can help generate the code for a specific customer faster and better than if programmed from scratch.
2) I have been influenced by the BPM (and Systems Engineering) movement, which seems to see process models as defining overall/holistic systems using any modelling language (like BPMN or SysML). Do you see BPM as a higher level of abstraction that your MDx and DSL approaches are guided by, or do you see BPM as a similarly deluded low-precision approach (like MDA)?