Why model?

In my last post, I suggested two problems with contemporary model-driven software development. This article looks at the first of those: why model?

Any technique must offer benefits to justify the overhead of adoption. Commercially that means: deliver better software faster. Benefits can arise from other avenues too, not least “it’s novel / interesting / all the hipsters are doing it”. But let’s focus on the commercial imperative for now. As Grady Booch observed,

“The entire history of software engineering is that of the rise in levels of abstraction.”

Modelling offers higher-level abstractions not found in mainstream programming languages. Even the much-maligned UML provides State Models and Relations as first-class constructs. Higher abstractions should enable more efficiency and so deliver better software quicker.

So why isn’t modelling commonplace?

Why don’t we model?

To answer that we need to look at how to model. There are, broadly, three approaches:

  1. Formal methods
  2. Domain-specific languages
  3. General purpose modelling languages

Formal methods

Formal methods such as Z and VDM have been around since the 1970s. More recently tool-supported approaches such as TLA+ and Event-B/Rodin have appeared. But if MDx is a backwater in the general software development world, it’s a veritable Amazon(1) in comparison to formal methods adoption.

Formal methods can definitely contribute to the “better software” imperative. Any impact on “faster” is a second-order effect, however: the models have to be translated into working software by hand. And the learning curve can be steep, requiring a solid foundation in the theory and notation of one or more mathematical disciplines (predicate logic, sets, graphs).
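To give a flavour of the style (a loose Python pastiche of the invariant / pre-condition / post-condition approach, not genuine Z, VDM or TLA+ notation, with names invented purely for illustration), consider a trivial account specification. The specification is checkable, but it is a description of behaviour rather than working software; translating it into a real system remains a manual job.

```python
# A toy, hypothetical specification in the invariant / pre-condition /
# post-condition style. It describes behaviour as predicates over state;
# it is not the working software itself.

def invariant(state):
    # Invariant: the balance never goes negative.
    return state["balance"] >= 0

def withdraw_pre(state, amount):
    # Pre-condition: you can only withdraw what is available.
    return 0 < amount <= state["balance"]

def withdraw_post(before, after, amount):
    # Post-condition: the balance drops by exactly the amount withdrawn.
    return after["balance"] == before["balance"] - amount

# A proof assistant or model checker would establish these properties for
# *all* reachable states; here we merely sample a single transition.
before = {"balance": 100}
after = {"balance": 60}
assert invariant(before) and invariant(after)
assert withdraw_pre(before, 40)
assert withdraw_post(before, after, 40)
print("specification holds for this sampled transition")
```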

Domain Specific Languages & Models

There has been a resurgence in domain-specific approaches in the last few years, as evidenced by the growth in Language Workbenches. Domain-specific approaches can directly address both “better” and “faster”. But they are not without hurdles, both technical and organisational. On the technical front, the challenge is language design. Textual approaches (e.g. Spoofax, Xtext, MPS, Rascal) require the designer to understand compiler construction: parsing, linking, semantic analysis, type systems and so on. Graphical approaches such as MetaEdit+ perhaps simplify that. But there’s still the question of designing a language.
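To make the compiler-construction point concrete, here is a deliberately tiny sketch (plain Python; the entity/field syntax is invented for illustration and says nothing about how Spoofax, Xtext, MPS or Rascal work). Even a toy textual DSL drags in tokenising, parsing and semantic checks, and that is before linking, type systems or editor support enter the picture.

```python
import re

# A hypothetical one-line DSL: entity Customer { name: string age: int }
# Even this trivial language needs tokenising, parsing and semantic analysis.

TOKENS = re.compile(r"entity|\{|\}|:|\w+")
KNOWN_TYPES = {"string", "int", "bool"}

def parse_entity(source):
    """Parse a single 'entity Name { field: type ... }' declaration."""
    tokens = TOKENS.findall(source)                  # tokenising
    assert tokens[0] == "entity", "expected 'entity' keyword"
    assert tokens[2] == "{", "expected '{' after the entity name"
    name, fields, i = tokens[1], {}, 3
    while tokens[i] != "}":                          # parsing
        field, colon, ftype = tokens[i:i + 3]
        assert colon == ":", f"expected ':' after field '{field}'"
        assert ftype in KNOWN_TYPES, f"unknown type '{ftype}'"    # semantic check
        assert field not in fields, f"duplicate field '{field}'"  # semantic check
        fields[field] = ftype
        i += 3
    return name, fields

print(parse_entity("entity Customer { name: string age: int }"))
# -> ('Customer', {'name': 'string', 'age': 'int'})
```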

The organisational barriers are at least as significant – and independent of the textual/graphical debate. Getting traction for a DSL depends heavily on the organisation’s approach to software. It’s possible in companies building software products, especially those offering related product families. The cost of investing in language design and tooling is justified through repeatability and hence efficiency. But not all software falls into the “product family” bucket. Even when it does, some organisations – and many developers – are nervous about building a proprietary language. Maintainability, recruitment and CV curation can be powerful adversarial forces.

General Purpose Modelling Languages

Mention “modelling language” in the context of software and the UML is never far away. If there were a poll for the most debated standard, it – in partnership with its close cousin MDA – would win hands down. No contest.

Why?

Because UML models aren’t executable, but MDA needs them to be.

Actually I should be more specific. The vast majority of UML models are mere sketches. Sketches aren’t working software.

Sketches need lots of human endeavour to translate them into working software. Which isn’t to say they’re bad: a quick diagram on the whiteboard can be invaluable. But it’s a long way from working software. At the height of its hype curve, the UML wasn’t capable of describing precise, executable models(2). Without those, it’s impossible to automate software generation. Without automation, we don’t get better software quicker.

This is the fundamental mistake with MDA:

  1. An incomplete language intended for sketches is not a viable basis for precise, executable models.
  2. Without precise models,
    1. no formal checking can take place. So the impact on “better” is marginal;
    2. no process automation can take place. So the impact on “faster” is at best nil.

Summing up: MDA didn’t deliver better software quicker. It had the hype and the backing of large organisations. It didn’t stick because, brutally, it didn’t work.

So – in the context of general purpose modelling – let’s be clear about this: as long as a manually intensive process sits between a model and working software, the model is no more valuable than a sketch.

Some will argue the UML now has the constructs required for executable models and can, therefore, support an automated process. But whilst it may have finally won the technical battle, it has emphatically lost the mind share war. It’s not even “not cool”; it’s increasingly just not known.

UML & MDA were the poster children of the model-driven world. They had hype and mind share that no other model driven initiative has come remotely close to. Sadly they turned out to be the Emperor’s new clothes.


That is not to say that MDx using general purpose languages is fundamentally flawed. Far from it. There are several examples of credible tools based on general purpose modelling: Bridgepoint, Cloudfier and Mendix to name a few. General purpose modelling can address both “better” and “faster” primarily because it enables separation of problem domain and technology concerns.
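As a minimal sketch of that separation (hypothetical, in plain Python; it is not how Bridgepoint, Cloudfier or Mendix work internally): the problem domain is captured in a declarative model, and the technology choice is confined to a small, replaceable translator.

```python
# Problem domain: an order's lifecycle, expressed purely as data.
ORDER_MODEL = {
    "initial": "new",
    "transitions": {
        ("new", "pay"): "paid",
        ("paid", "ship"): "shipped",
        ("shipped", "deliver"): "delivered",
    },
}

# Technology concern: one of many possible translators. This one emits a plain
# Python class; a sibling translator could target SQL, Java or a REST service
# from the very same model.
def translate_to_python(model, class_name):
    return "\n".join([
        f"class {class_name}:",
        "    def __init__(self):",
        f"        self.state = {model['initial']!r}",
        "    def handle(self, event):",
        f"        table = {model['transitions']!r}",
        "        self.state = table[(self.state, event)]",
    ])

# 'Generate' and exercise the working code.
namespace = {}
exec(translate_to_python(ORDER_MODEL, "Order"), namespace)
order = namespace["Order"]()
order.handle("pay")
print(order.state)   # -> paid
```

Change the model and the generated class changes with it; change the translator and the same model targets a different technology. That, in miniature, is where both “better” and “faster” come from.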

What’s missing?

In short: an easy on-ramp.

Formal methods are powerful but have a steep learning curve. They can most definitely facilitate “better” software and to some extent “faster”. DSL approaches can enable “better” and “faster” but have both organisational and technical hurdles.

Either can be extremely valuable – but there are challenges in getting there. Users might see the motorway, but there’s a roadblock to navigate first.

General purpose modelling should be the on-ramp. Whilst it may not offer all the benefits of formal modelling or DSLs, it can come without the barriers.

Unfortunately it’s stigmatised by the UML/MDA debacle. But it can, and should, offer the stepping stone. Indeed for many it may be sufficient. We’re missing the things that make it easy. Instant gratification. The things that attract people because they make the process simpler / quicker / rewarding / fun.


Specifically, five things are missing:

  • whilst there is a plethora of tools for building models, few of them support executable models. Of those few, fewer still are actually rewarding to use.
  • we’re missing the pre-existing models that serve as exemplars. Models that are demonstrably translated into real, working software. Models that can be adapted or reused to meet different requirements.
  • we’re missing the translators that turn those models into working software. Automatically, quickly and repeatably. We have the tools to write those translators: we don’t have the translators themselves. At least not robust, industrial-quality translators that produce robust, industrial-quality software. That can be used by real users or sold to real customers. Results that look as good as, and function as well as, ‘hand-written’ alternatives. Crucially, those translators need to be open for adaptation.
  • we’re missing the cohesive environments that make it easy. Environments that don’t need weird hacks or obtuse incantations to make them work. Tools that “just work”. Tools that combine the constituent parts for modelling and translation into a consistent, seamless, industrial-quality experience.
  • we’re missing eco-systems that pull these things together to forge communities. Communities that generate interest because they’re doing cool stuff.

 

What to do?

If modelling is to gain significant presence we need to address the issues above. Despite the reservations of the meta-wizards, the muggles can model. Sure, some will be better than others. But there are plenty of average programmers making a decent job of churning out software today. Plenty of them would be happy to work in environments that make the job easier, more rewarding and more productive.


It can be done. Mendix is a promising example that addresses all of these points. Interestingly, Mendix mutes the “Model Driven” message, placing much greater emphasis on the result – better software quicker. That’s exactly as it should be. But one tool an eco-system does not make. We need more.

People will embrace modelling if there’s a compelling reason to do so. That “compelling reason” is the ability to produce better software quicker. If modelling is to gain popularity, the community must embrace that mantra.

 


  1. Amazon the river of course. It is however topical, given that Amazon the company have published papers on their use of TLA+.
  2. Indeed, Grady Booch has consistently stated that supporting executable modelling was explicitly not a goal for UML. Which appears somewhat at odds with his observation on the history of software.

Featured image credit goes to Wade M
