In short, I’m afraid the answer is NO (and my belief is the same applies to MDE in general). Recently I had a couple of strong “déjà vu”. The first one while reading the “UML in Practice” ICSE’13 paper and the second one while browsing the tweets of the last MiSE’13 workshop.
The results of the ICSE paper are quite similar to those reported in this 2006 paper (Brian Dobing, Jeffrey Parsons: How UML is used. Commun. ACM 49(5): 109-113), and I really think that if I had said the MiSE discussion was an (anonymized) historic report of an event that took place 10 years ago, few would have noticed the trick.
Sure, one thing has indeed changed in the last decade: we have managed to publish a few hundred additional research papers on these subjects. I’m not sure which is worse: the fact that a large part of the industry does not seem to realize the benefits of modeling (yes, I’m implying I still believe they exist, obviously when following the “right amount of modeling” principle), or the fact that so many research papers have not had any positive impact on the adoption of UML/MDE.
Interestingly enough, UML and MDA were built on some totally unwanted coupling (hint: OO is the problem, not the solution) and misplaced decoupling (hint: the code is the model). There will be a day where the industry will look back and wonder how it could have been so wrong, for so long on an approach that touted itself as being the light of the industry, capable of untangling the darkest problems.
“OO is the problem, not the solution”
“the code is the model”
Can you expand?
It’s difficult to elaborate my answers as “comments”.
I have written this post on Metaprogramming which should explain why the code is the model: http://www.ebpml.org/blog2/index.php/2013/05/17/what-if-barbara-liskov-had
And that one, which hints that abstract data types may not be the foundation of software engineering: http://www.ebpml.org/blog2/index.php/2013/05/20/programming-with-abstract-state-types
I explain here that there are seven semantics which are universal, absolutely universal (hence part of UML), but unfortunately UML waters them down in an extremely complex metamodel: http://www.ebpml.org/blog2/index.php/2013/05/23/an-introduction-to-bolt-video
The path forward is to dissociate these seven semantics from Software Engineering and rearchitect “programming” into “metaprogramming” while not being afraid of surfacing “state” instead of encapsulating it.
Sorry, I don’t have much time to create a more specific answer.
Ah! Everyone has their own language. Where to then but meta-meta and meta-meta-meta, …? I don’t see the advantage. Everyone is already writing their own software in the same languages, so I shudder to think of changing jobs when everyone’s writing their own software in their own language.
I do realize that languages could be industry-topical, but I’ve experienced too much corporate ego to expect such a utopian vision to come to fruition. At best we come to a collection of industry-topical “UM(eta)L”s.
I guess you don’t understand what I am talking about. Every developer translates their conceptual view into a given language. This process is broken and IMHO cannot be fixed; we have tried for the last 50 years without making any kind of progress. One could argue that Software Engineering’s progress has only been driven by hardware progress.
To make progress, we need to acknowledge that programming needs to happen in three dimensions:
– structural (conceptual)
– logical (business logic)
– physical (boiler plate)
It is not until we clearly decouple these 3 dimensions that we will make progress. Thinking that someone else’s structure is a good fit for a wide range of solutions is naive at best. Thinking that mixing logical and physical code makes sense is delusional at best, …
… but what do I know?
@Jean-Jacques Dubray thanks for your answer to my question. Read your links and similarly answering everything in a comment isn’t really appropriate, so here’s a summary of my reactions.
1. Wholeheartedly agree with separating “code generation” out. I prefer to think of this in the Shlaer-Mellor sense of separating model(s) of the domain(s) from the software architecture. But I think the principle is the same. It also neatly answers one of the points in your links: how to deal with the plethora of buzzword tech that permeates software development, particularly IT: xml, SOA, sql/nosql, and so on. These are concerns of the software architecture – not the business problem and hence not the domain model.
2. Splitting “domain modelling” into 2 elements is less clear. I harbour some of Lee’s reservations about the wisdom of everyone having their own language. Like everything else, DSLs/metaprogramming have their uses and dangers. I have come across several domains where a good, stable model is one level more general than the specific need at hand. And therefore it’s necessary to ‘instantiate’ or ‘configure’ the model to address the specific problem. Having a domain-specific notation to do that is helpful in many circumstances, instead of simply creating object instances.
Against that: there are recurring constructs that creep into many DSL activities. Things, attributes and relations to describe structure; state to describe change over time; and processing to effect changes to state. While OO (and I mean the OO model – e.g. as exemplified in Shlaer-Mellor OOA – and not as implemented in 3GLs) /might/ be too constraining, equally BNF is arguably too general. I have found SMOOA really very good for describing domains in a way that (a) is abstract of the underlying software architecture, (b) is precise enough for automated translation and (c) can be reviewed with stakeholders. The only limitation is lack of in-built support for domain-specific notation for more general domains as described above.
Interesting topic anyway and thanks again for your reply.
Scott,
thank you so much for investigating my posts. The reason for splitting the business logic into a “clear” programming model and metadata is based on my experience that if the people who produce the programming model and the people who use it (to write business logic) are not the same, then all kinds of issues pop up.
When you need to express some business logic there will always be the decision:
a) shall I expand my programming model (to make it easy to express the business logic)
b) that’s business-as-usual: I can use the existing programming model/conceptual level to express the business logic that I need to express
I no longer believe that some people can design tools, DSLs / programming models and ship them over the fence. I strongly believe that programming needs to happen in these dimensions simultaneously, with the same people/team working in each dimension. Ultimately there will be commonalities in the same company or even industry, of course, but rigid DSLs (designed far away from where they are used) don’t fly.
I have been doing and teaching MDE since 1990 (without knowing that was what it was called back then). When I discovered Interface Builder in 1990, it changed my mind forever, and I could never think outside of DSLs ever since; but after trying to build many of them (including my latest one, http://www.canappi.com) I came to the conclusion that we needed to move to Metaprogramming (with external DSLs; I am not a big fan of tying metaprogramming to any technology).
I hope that answers your question.
JJ-
The (3GL) code is NOT the model! The code is the product of all the models. Has some new paradigm succeeded OT, or are we moving backwards toward previously discredited functional handling of systems? (Hint: coupling, brittle flows, etc.)
OO is often a shortcut and also, somehow, a trip.
From code to models, or raising the abstraction level: nothing here is really newer than the CASE/4GL era.
Something here is in fact power-driven, not actually driven by markets or practice, nor by academic research or theory.
😉
Reading the first paper shows how primitive the sample set was with respect to software development. The choices in the survey seemed to be based on a primitive understanding of model-driven development. e.g., code generation implies modeling occurs after design.
One question about the persistence of these findings concerns whether the flaw lies in the UML specification. Let’s face it, there was no path to MDA until the Action Semantics were added, and that addition brought little understanding of usage. fUML has added some guidance, but still suffers from some of the same laxity as in the broader UML. Were we really worse off in the days of multiple notations? Maybe UML is the Microsoft Windows of modeling notations; we were forced into using it, and now face a long struggle back to something better.
If market penetration of UML was similar to that of Microsoft Windows I’d be really really happy 🙂
I strongly agree with the last paragraph of your post, Jordi: “Sure, one thing has indeed changed in the last decade. We have managed to publish a few hundred additional research papers on these subjects. […] so many research papers have not had any positive impact on the adoption of UML”.
This “UML in Practice” ICSE’13 paper lacks, in my opinion, any scientific basis. Its selection of respondents is not statistically significant, and it starts from wrong and arbitrary assumptions: the very idea of a hypothetical “full use of the UML” reveals that the author does not really understand what the UML is and what its purposes are.
The UML may be used as an aid to think about a problem (draw shapes and lines on a sheet of paper) OR to communicate something to someone with some specific intent OR in a model driven code generation approach. In any of these different usage contexts, only selected uses of the UML make sense.
Adriano,
just compare BPMN with UML. If a standard is not adopted “as intended” it probably means that it was not designed properly. What the original editors of the spec had in mind, a Visio stencil, was the least of their targets.
The scope of BPMN, modeling business processes, is minimal if compared with the scope of UML, modeling every possible software system from every possible point of view.
That a standard is not adopted “as intended”, or that it is “designed properly” are respectable opinions and subjective judgments, but these are not facts. Intended by whom? And, what is the meaning of “properly”?
I don’t know if this matches the original intent, but I would define designed properly as usable, reusable, and extensible. Usable means it is easy to work into the product, reusable means it is easy to work into another like product, and extensible means it is easy to work into an entirely different product.
In order to achieve this, you have to be able to precisely identify the problem space that you are designing. i.e., good separation of concerns
@Adriano: what do you mean by “selection was not statistically significant”? Do you mean the population of software developers was not properly represented in Petre’s paper? This seems like an irrelevant detail to me. First off, we will never be able to agree on what a software developer is. Secondly, even if we could, we would never be able to find a representative sample that would fill out a survey, which seems to be what you have in mind.
The best way to argue against results like these, if you disagree, is to run a similar study, securing interviews with a large number of professionals in various industries and roles, and show that Petre’s results aren’t replicable. Best case, you somehow track down a majority of people who will vouch for the criticality of UML in their work, be it for communication, documentation or analysis.
However, given that her findings accord with what several other studies have shown, AND with the folk wisdom industry has (namely, that UML is not that useful), I’m inclined to believe her results have merit.
@Neil: I remember Steve Easterbrook complaining about how difficult it was to publish a replication of a previous experiment, since journals/conferences in our area didn’t seem to see the value in it. Has the situation changed?
@Jordi: my experience is no. The majority of our conferences and journals do not seem to accept (or guide reviewers and editors to accept) replication of studies. An exception now seems to be ICSE, which can only be a good thing (see http://2014.icse-conferences.org/research, “Replications are welcome”). Then again, who can afford to go to ICSE the way it’s going? 🙂
Unfortunately our research systems in CS/SE are in rather too much of a mess for replications to take off. As a community we seem to emphasise accepting research papers/grants that build something new (theory, tool, method etc.); reviewing (of grants/papers) always seems to emphasise “what’s the new thing they’ve done?”, at the expense of synthesis and replication. We need to change this, at the reviewer/PC member/PC chair/grant panel chair/funding body director level. This is not impossible, but it needs a concerted effort. ICSE’s move is a good one, and I hope the reviewers, PC members, program board members and chairs really succeed with it.
The biggest battle will be with people who make decisions about funding. An ICSE’14 replication paper would likely be considered unsubmittable to the UK’s REF (research assessment) exercise. There, research papers are judged on novelty/originality, rigour and impact. Such a paper would almost certainly score highly on rigour, but quite possibly low on originality (“You are confirming known results – shame on you!”) and impact (“This won’t change how people think or behave, will it?”). Part of this is a cultural thing – such replication studies are accepted in other disciplines. Maybe the REF panels who decide on such things need to be more interdisciplinary: get someone from medicine on the CS/SE REF panel to challenge all the formal methods people who will try to downgrade a replication paper because it doesn’t develop a new theory of mobile communications over ad-hoc networks 🙂
Rant over 🙂
I don’t think it would have mattered if there had been a statistically significant number of respondents; the survey was flawed. It placed assumptions on the method that, as I stated above, were primitive compared to the state of the art. OTOH, a less biased survey may have shown that the majority of usage is primitive.
Adriano,
yes, you are correct. BPMN after all is only a 300+ page specification. What was I thinking?
We of course have no understanding of concepts like scope and product management, and when companies design products they use a random process: they have no consumer in mind and no specific purpose. I believe the Greeks had a word for that: “spontaneous generation”.
UML grew from OO and was first designed to solve OO problems. That lineage confused everything as we realized later that very few solutions could be built on top of OO (SOA, BPM, EDA, B2B, Relational Data Models, Extensible Data Structures … all come to mind). UML tried to adjust, but too little, too late. The damage is done.
Personally, I am not too surprised that trying to serve an infinite scope from the wrong foundation, you get pretty much nowhere.
JJ-
Some interesting and thought provoking responses to quite a reasonable article I thought.
For my 2 pence, the first thing I want to reiterate is that a model is something used to conceptualise an entity without an explicit reference to the detail of the real thing. Otherwise we hit that famous Bonini paradox (as the model of the problem space becomes more and more realistic, it becomes just as hard to solve as the real thing it is modelling).
As stated already, it allows us to reason about that entity. This is very distinct from the representation or ‘viewpoint’ that someone has of their concerns within the system. Source code is NOT the model, BPMN is NOT the model and UML diagrams are NOT the model. These are all representations of the model from some stakeholder’s viewpoint (I tend to include IT and development staff as stakeholders in the system). The model itself is softer than software. Meta-meta-… programming would be the way to go to keep reasoning about these ever more abstract models, but the vast majority of the time a more useful approach is the separation of concerns that Jordi and JJ have commented on. After all, Kruchten’s 4+1 views are a manifestation of exactly that, where the three ‘views’ mentioned by JJ are augmented with a process view (most useful for viewing multi-process applications and parallel threads of work) and also guided by the business scenarios (the ‘+1’) within the system, which over-arch the other 4 views. However, the principle is still the same.
For example, as a rough guide: an architect might use a set of UML diagrams to represent those views from their viewpoint; a developer will see these as code elements such as workflow, method calls, DLLs/SOs/services/apps, threads/executables etc., concentrated around the development and physical views; a business analyst would see these views as capability enablers, concentrating around logical and scenario views (including BPMN, HLD, roles); and a technology services staff member would see these views as a platform and a dynamic system, focusing around process and physical views. So in principle, there are three main ‘pathways’ through the views to the physical manifestation.
1) Developers = Scenario->Logical->Development->Physical
2) Business Analyst = Scenario->Logical->Process (optional)->Physical
3) Tech services = Scenario->Process->Physical
For me, as a dev/architect, UML is a fantastic tool to quickly throw up some ideas. Sure, it was derived from OO software. However, as a point of completeness, we have to remember that OO itself was a solution to the problem of modelling objects as in real life. In truth, every organic or synthetic system is composed of statics and dynamics, and OO melded the two together just as we have in real life. However, it is certainly not perfect for all types of viewpoint. So we have to ask ourselves the question: what representation would be correct for the domain at hand?
Despite my love for UML, I don’t think it is perfect for every single scenario when moving towards a truly applicable MDD ethos. It is just a language for developers in the domain of software development to communicate with by representing models. After all, what does UML mean to a water services technician? Why should s/he care about UML? What about electronic engineers who use electronic schematics, VHDL and PCB wiring diagrams? What all these fields have had for decades are effectively manifestations of DSLs for their domain, which are interpreted/compiled/processed by a processor of some sort to deliver the actual thing. Just as a compiler (which delivers software) is often bootstrapped from an assembler or another language, the DSL translator is bootstrapped from elements of software, but it wasn’t always the case (machine code, which is a bit more than glorified arithmetic, used to be written by hand 😉).
EA
I haven’t seen DSLs in practice likened to electronic schematics. Most of what I’ve seen presented would be like a schematic notation that depended on the product usage. e.g., bank computers would use a different electronic notation than defense computers. A true electronic schematic analogy would be a notation that represents the software domain. UML doesn’t encompass everything (e.g., complex algorithms), so it isn’t the DSL of the software domain. Complex algorithms are already covered by mathematical notation, so what’s really needed for them is a math notation to machine code compiler.
Not sure I agree. That definitely seems the wrong way round. A domain-specific language is a language meaningful in the domain of delivery, which can then be delivered through some process. In the electronics analogy, why would a microelectronics systems engineer want to learn to code just to deliver a circuit which s/he then has to translate into a schematic to fabricate anyway?
As much as I am ashamed to say it, even MS have cottoned on to the idea 😉
http://msdn.microsoft.com/en-us/library/bb126278.aspx
However, I don’t disagree that electronic schematics are different for different contexts. This doesn’t just vary depending on whether you are wiring a house or a PCB, but even the notation itself hasn’t been standard across geographies. Luckily, UML doesn’t have a problem with geography 🙂
UML is a DSL for the domain. It just isn’t a developer’s view of that domain. Architects often rely on the developers to make that transition and code is the best way for software developers to do that. When working in electronic domains, you have different views just like you do in software. Package/Component diagrams are akin to block diagrams in electronic systems engineering. They don’t wire every single pin to a component somewhere else. The concepts of a bus, memory, processor, I/O etc. are represented in the language by aggregated blocks.
UML does have a ‘math’ notation to machine code ‘compiler’. You do it through OCL verifiers and the language is OCL.
I personally happen to like formal specification. I like VDM, Z and OCL. Again, even MS Research delivered Spec# to validate systems this way, and it isn’t a huge step to shift it to execute blocks of code by tying it into an object-functional language and deploying through the CodeDOM on that platform.
EA
EA,
the fundamental problem, so to speak, with verification is that we have no formal way to define a problem statement. So how can you verify what you cannot express?
Now, if you think of it, there is actually a fairly simple structure to problem definition, which in turn ties nicely to BDD. A problem can be expressed as:
a) a missing transition between two states
b) or an unwanted transition between two states (which we can try to lower the probability of happening, compensate, …)
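These two problem shapes can be sketched as plain set differences over a transition relation. The following is a minimal, hypothetical illustration: it assumes a system’s behaviour is just a set of (source state, target state) pairs, and all state names are invented for the example.

```python
# A minimal sketch of the two problem shapes above, assuming a system's
# behaviour is just a set of (source_state, target_state) transition pairs.
# All state names here are invented for illustration.

def missing_transitions(desired, actual):
    """Type (a) problems: transitions we want but the system lacks."""
    return sorted(set(desired) - set(actual))

def unwanted_transitions(desired, actual):
    """Type (b) problems: transitions the system exhibits but we never wanted."""
    return sorted(set(actual) - set(desired))

desired = {("quoted", "ordered"), ("ordered", "shipped")}
actual  = {("quoted", "ordered"), ("ordered", "cancelled")}

print(missing_transitions(desired, actual))   # [('ordered', 'shipped')]
print(unwanted_transitions(desired, actual))  # [('ordered', 'cancelled')]
```

Nothing more than set difference, but it makes both kinds of problem statement mechanically checkable against an observed system.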
I wrote a post describing how “problem statements” integrate into traditional software engineering processes and practices, and it seems to fit very nicely (http://www.ebpml.org/blog2/index.php/2013/04/26/reinventing-agile-from-value-to)
Again, the interesting aspect of problem statements is that they surface “state” in a deliberate way (just like BDD). It seems to me that state is the most undervalued and, to be frank, abused concept in software engineering.
Maybe the solution is right before our eyes: the foundation of software engineering is state, not computing. But again, what do I know?
JJ-
Hi JJ,
Having read your blog and what you have written above, I agree with large proportions of your blog post and I absolutely agree that state is totally misunderstood and misused in the software world. So are side effects (which can lead to the unwanted or even invalid transitions you speak of).
However, firstly, BDD is NOT a specification mechanism with anywhere near the rigour of VDM, Z and OCL. Indeed, when it gets to the code, unlike formal methods, running BDD specifications through verifiers with languages such as Gherkin can introduce side effects into the development process (for example, SpecFlow on .NET: step files are simply coded in the imperative language they test, and if that language can allow the introduction of side effects, so can the tests). BDD, and Gherkin in particular, are ways to close the communication gap between the client and the dev; nothing more. Don’t get me wrong, it’s a trade-off, but one that is often desirable to gain a clearer understanding of the requirements from the user, who can more or less articulate them in that language.
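The step-file point can be illustrated without .NET. Below is a hypothetical set of plain-Python step definitions (the function names and the scenario are invented, and these are not real SpecFlow or Gherkin bindings) showing how an imperative step body can smuggle in a side effect the Gherkin text never mentions:

```python
# Hypothetical step definitions in plain Python (not real SpecFlow/Gherkin
# bindings) showing how imperative step bodies can introduce side effects.

AUDIT_LOG = []   # module-level state the .feature text knows nothing about

def given_an_account_with_balance(world, balance):
    world["balance"] = balance

def when_the_user_withdraws(world, amount):
    world["balance"] -= amount
    AUDIT_LOG.append(amount)   # side effect: invisible to the specification

def then_the_balance_is(world, expected):
    assert world["balance"] == expected

world = {}
given_an_account_with_balance(world, 100)
when_the_user_withdraws(world, 30)
then_the_balance_is(world, 70)   # the scenario passes...
print(AUDIT_LOG)                 # ...but global state has silently changed: [30]
```

The Given/When/Then text says nothing about AUDIT_LOG, yet two runs of the same scenario leave the process in different states; this is exactly the kind of leak a formal pre/post-condition specification would forbid.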
To link to your comment about state, as I often spout, a system is a very simple thing to define mathematically. It is (S(0), G), where S(0) is the initial condition and G is a graph of the states (in turn defined by the usual G = (V, E)). At that level, there is literally nothing else. Remember that a graph is another graph’s sub-graph, just like a system is another system’s subsystem. Hence, in each transition the S(0) is an OCL pre-condition (a Gherkin ‘Given’), an event occurs (‘When’), and at the end a post-condition state occurs (‘Then’). We have everything we need to define any system in IT, nature, mathematics, anthropology, anything! As long as you can formulate the problem in mathematical terms, you can reason with it, even if the universe of discourse cannot be both complete and consistent (but there is no such practical manifestation as an infinite system with a universal truth in IT anyway; we can’t even represent floating points accurately enough to be ‘real’). The problem lies with the IT industry not being able/having the skills to formulate the problem, not that we cannot formulate the problem.
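That definition of a system is small enough to write down directly. A minimal sketch, with invented state and event names (this is not any real framework’s API): the pre-condition state plays the ‘Given’, the event the ‘When’, and the post-condition state the ‘Then’.

```python
# A minimal sketch of "system = initial state S(0) + graph of transitions G",
# where each transition is read as Given (pre-state) / When (event) / Then
# (post-state). All names here are illustrative.

class System:
    def __init__(self, s0, graph):
        self.state = s0      # S(0), the initial condition
        self.graph = graph   # G as a map: (pre_state, event) -> post_state

    def fire(self, event):
        key = (self.state, event)
        if key not in self.graph:
            raise ValueError(f"no transition for {event!r} from {self.state!r}")
        self.state = self.graph[key]   # the 'Then': post-condition state
        return self.state

# Given a door that starts closed...
door = System("closed", {
    ("closed", "open"):  "open",
    ("open",   "close"): "closed",
})

door.fire("open")   # When the 'open' event occurs...
print(door.state)   # Then the post-condition state is "open"
```

Everything else (sub-systems, composition) follows from the sub-graph remark: a subsystem is just a System built over a sub-graph of G.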
In IT, UML covers that very nicely with its state chart, and as you evolve it, it amalgamates with the activity diagram you originally built from the use case, and you should have a complete logical process by that point. To test a system we have to link a post-condition to the end state (which you can define in your blog post) and test that. Additionally, by using state charts, the development of the software and, say, the business itself move through several states.
I don’t want to hijack this thread with your blog, but moving businesses on using the articulation of a) to e) in the perma-link is something that TOGAF does fairly well already. As it happens, as you get towards the end, you are moving in the direction of it as well. It works towards some target architecture through a series of transitions of the business (I argue: “is there ever a target, or do we only ever work through transitions as organisations adapt to the landscape?”). It defines problem statements, works towards the problem statement from a business-focused perspective (something us techies can’t do adequately), articulates the business strategy, the problem statement, the solution and the verification using the same framework (through different points in the enterprise continuum), and derives ‘organisational IQ’ and maturity from the problems being solved every day. The one thing I think it could be stronger on is the definition of the processes. TOGAF and ArchiMate do not have an OCL-like language as of yet. A hole in the net perhaps? Feel free to put that to the standards committee or build a new one. However, if it turns up looking like Gherkin or BOLT, I will personally slap Dan North in the face with a wet fish… metaphorically speaking, it might not be a fish, or a slap, or me, or Dan North 😉
So let’s suppose we want to head North (pun intended). Agile development is not the theodolite that measures a straight(ish) line from A to B. It is like the ancient way of surveying by sticking forked poles into the ground, where you looked through the forks at the top of the pole only as far as your next pole’s position (future ticket/sprint) and back at your previous one (retrospective) to make sure you are roughly heading in the direction you want to go. It ‘swims’, kind of fish-like, towards North 🙂
As it happens, that is the nature of agile development. As soon as you put that first peg (read: story) in the ground, as you say, the subset of all possible directions suddenly becomes smaller, and by quite a way. Hence the importance of the story capturing everything to do with it – but stories don’t capture anywhere near that. If you keep heading in that direction with the poles in the same manner, towards some end location, each time the deviation from the direction gets smaller. This has good implications for such concepts as the cone of uncertainty (risk) and delivering the biggest bang for buck (sensitivity, which can be positive). However, it requires discipline to keep looking through the forks back and forth. Failure to do either fails the whole line. It isn’t anywhere near as accurate as a theodolite, and if you don’t have a compass and it is a cloudy day, you might not even be pointing North at all. Undoing all that work if you have it wrong is one of the many reasons why agile methods could result in MORE waste than structured methods, not less. Hence why it is so important to have the clients actively on board. It simply doesn’t work without it.
EA
wow ! thank you so much for reading my ramblings in such details and crafting such a thoughtful response.
It’s easy for me to hit UML on the head since pretty much anything I learned in the last 15 years or so can be traced directly to UML. But in the end, the real issue with UML is the “one-size-fits-all” approach. I don’t dispute that OCL has the correct semantics to support verification, though, to be frank I have not looked at it in detail (see below why).
Allow me to shift gear and take the discussion to a different (but related) direction. Jordi will let us know if we should take this thread somewhere else.
So here is my train of thoughts:
>> The problem lies with the IT industry not being able/having the skills to formulate the problem, not that we
>> cannot formulate the problem.
1) I agree to a certain extent. This may not be a question of skills but more a question of culture, or even beyond…
For sure, as humans we think in terms of actions, not states (a question of evolution if you ask me), so it goes beyond skills. It’s usually a violent exercise to force your brain to identify the states.
>> In IT, UML covers that very nicely with its state chart and as you evolve it, […]
2) To be honest I have less assurance than you about that paragraph…
It rather sounds to me as if you are claiming that because we understand quantum mechanics, we can infer exactly what the universe looks like. There is of course a connection, but I am not sure the connection is that clear, or even accessible, for either quantum mechanics or UML.
>> Feel free to put that to the standards committee or build a new one.
3) ha … BOLT is precisely not a standard at all….
So let’s shift the discussion here if you agree, because I think this is really important.
If you read the book – B = mc2 (which is free in most parts of the world; if not, let me know, I’ll try to make it free) – I explain that somehow Winograd and Flores broke a symmetry in the 80s with their book on computer cognition (conversation for action – http://hci.stanford.edu/winograd/papers/language-action.html) and everyone went one way, the only way, the modeling way, i.e. we can and we should decompose everything with (enough, possibly ultimate) precision… And we landed on UML, TOGAF, BPMN, you name it, and kept adding precision until everyone got sick of it. Not because of “skills”, but because of the rigidity of the model: you just can’t speak like you model, and you can’t model like you speak.
Don’t get me wrong, I’m not saying that there isn’t value there, but there is equal value, if not more, in going the other way: leaving conversations at the level of conversations and simply weaving the conversation into a simple (but not simpler) semantic structure, not à la RDF or OWL, but using the seven semantics I identified in BOLT, which are somewhat, but not exactly, a subset of UML.
The fundamental issue, the fundamental break in symmetry that Winograd and Flores induced (involuntarily), is that semantics focused on information modeling only (RDF is about “representing information about resources”). Modeling went a bit further, but let’s be honest, it is 90+% about information modeling too; even MDA and the whole DSL movement reduce programming to a form of information modeling. I know people will disagree, but that’s ok.
From there everything went South (or North, or maybe just nowhere, if that can be defined as a destination), and interestingly linguistics fell into the “algorithmic” crack, because of the AI/cognition focus.
Overall, this “conversation for action” created an awful factoring of these 3 fields for 30 years, when they should and could have been nicely articulated if people had looked at the symmetry of the “conversation for action”. Yes, one could “model” it, but to be frank people could also have seen that there seemed to be a “fabric” behind the conversation, not a “model”.
IMVHO, if you rethink the “conversation for action” in those terms, linguistics, semantics and modeling can be re-founded around that very fabric (the 7 semantics identified in BOLT – the use of the verb “identify” is intentional, as opposed to, say, “invented”). What I mean by that is that every conversation can be “approximated” (not modeled; the distinction is fundamental) with these 7 semantics. Occasionally, people might extend them for convenience, but I rarely, very rarely, need to. This approximation has tremendous, tremendous value in clarifying the conversation, articulating cognitive processes and bolting models onto it.
I understand that this last paragraph amounts to throwing a big rock in the pond, possibly a moon-sized rock, one that could go as far as completely shattering the (IMHO flawed) algorithmic view of the brain. I may well be burned for this, or at least laughed at (I’ll take the risk since you seem to really be able to understand what I mean here).
When I finished writing the draft of the book, Jack Greenfield asked me how BOLT (at the time it was not called BOLT) related to the Conversation for Action, and since then I have not been able to stop thinking that these seven semantics are at the foundation of these three disciplines, and that it is not until we re-found them properly that we will make progress. Then, and only then, the skill problem will disappear, because we will have unified the way we speak, reason and model (I don’t think this unification will be isomorphic, but I may be wrong). As you probably noted already, there is no “algorithmy” in that foundation.
PS: I agree wholeheartedly with your comments on Agile. Dan North sure knows his Venn diagrams. Nothing like getting lost in a dense jungle and pivoting to a new direction/strategy to understand what Agile can potentially do to your organization. But of course, not all IT projects look like a jungle…
If the culture of the IT industry is either to never work through a process where UML is at the core, or to do it once, hate it and never do it again, then practitioners never gain the skilful usage that comes from practice, in the same way they do with code-based processes such as those espoused in agile methods. I sit on both sides, in heavyweight and agile spaces, and agile has taken off precisely because of the culture and not because it necessarily produces ‘better quality’ software deliverables than a well-followed heavyweight method. At best it produces just good enough quality for what was needed at the time to deliver roughly the optimum business value, which is often enough.
I am definitely not implying that the universe can be gauged precisely from quantum mechanics. Quantum mechanics, as you are alluding to, is an incomplete definition of the universe. So is Newtonian mechanics. Unifying them via something like M-theory, or whatever the flavour of the year is now, aims to give an overarching theory covering all these others. If you think only in quantum mechanics, then you are going to get lost. Similarly with Newtonian mechanics. That was my criticism of agile. Agile in practice only considers the programmer’s view (and psychology). UML considers a different set of concerns than the computer’s or the dev’s. The biz person considers neither. They have their own.
FWIW, I prefer theodolites personally, but it does require the ground not move whilst I use it 🙂
UML does NOT claim to solve every problem, for any system (organic or mechanical), from any viewpoint. It solves concerns and allows a reasoning mechanism for software systems. Indeed, TOGAF also doesn’t claim to infer anything about lowest-level solution concerns. It is an EA framework and not a solution framework. So models and methods such as UML, BPMN, ITIL, CRISC, ATAM, Agile etc. can all fit alongside it in the solution architecture space; not just that, it is actively encouraged, since you have to tailor that EA framework to work at your particular organisation. It takes it from the ‘foundation architecture’ through ‘common systems’, ‘industry’ and ‘organisation specific’ architectures, which effectively take it from the general to the specific.
Any system at all, even meta-systems, can be defined using the basis I stated previously. Let’s use a simplistic form of constructive proof: show me any deterministic system that doesn’t have a static and a dynamic element, i.e. one that can’t be defined with just a process and an initial condition.
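That challenge can be made concrete with a small sketch (a hypothetical illustration I am adding here, not something from the original comment): a deterministic system reduces to an initial condition (the static element) plus a process that advances the state (the dynamic element).

```python
# A minimal sketch of the claim: a deterministic system is fully
# defined by an initial condition (static) plus a process (dynamic).
def run(initial_state, process, steps):
    """Iterate a deterministic process from an initial condition."""
    state = initial_state
    history = [state]
    for _ in range(steps):
        state = process(state)  # the dynamic element
        history.append(state)
    return history

# Hypothetical example: a counter that doubles at each step.
trajectory = run(1, lambda s: s * 2, 5)
# trajectory == [1, 2, 4, 8, 16, 32]
```

Any deterministic system you can name (a state machine, a simulation, a program) fits this shape; the challenge is to exhibit one that does not.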
As humans, we actually do think in concepts. Whether a man is sitting, standing, lying down, dancing etc., he is still a man. That is the concept of a man, without the details (http://www.scientificamerican.com/article.cfm?id=single-brain-cell-stores-single-concept). We store this as a model, whichever way it is encoded (usually by some synaptic network, whose structure and function are not yet known and potentially unpredictable, given the differences in neighbouring neuronal structures). However, compared to the short-term (RAM) memory of a computer, us humans are really stupid. We can, on average, only remember 7 plus or minus 2 elements. So when we go down the route of making things increasingly more specific to the organisation, we risk the Bonini paradox again, and even if the dev world didn’t get sick of it, they would still have a problem just as hard to solve as the original problem. Hence, we have different people doing different things, at different levels, just to cope.
I certainly agree, however, that we don’t model the way we think, nor vice versa. Then again, nobody thinks exactly the same way. If there is anything the IT world has shown me, it is that. Some people are more linguistically able, others more mathematically able, and yet more are spatially able etc. It follows Gardner’s theory of multiple intelligences, and the way people reason will be more naturally biased towards their specific way of conceptualising: some with words, some spatially, some mathematically etc. You will get a disconnect every time there is a concept which doesn’t meld nicely into the individual’s reasoning mechanism. You see this with people’s personalities as well, which is why matching the ‘de Bono’ or Myers-Briggs personalities is so important. It is another thing TOGAF happens to acknowledge, recommending it be considered in the stakeholder management/communication plans (some tailored TOGAF systems match ‘colours of thought’). If they are not matched, communication is less effective and can be destructive to the relationships as a whole (and hence the resulting systems). Sometimes you’ll never convince anyone, even in the same industry.
EA
>> It solves concerns and allows a reasoning mechanism for software systems.
If I understand your argument correctly, you meant “for all software systems” (but not for all, or any, type of systems). Hmm… so why did we have to create BPMN after UML, when the two have pretty much nothing in common? I understand that U stands for unified, not universal.
(If you didn’t mean for “all” software systems, then for which ones?)
So either BPMN was not needed, or UML is missing at least one very large conceptual area in terms of the concerns it can reach and the reasoning it can support.
We are at the core of the argument: either the scope of UML is too big and we have to stop claiming such a broad scope (“solving concerns and allowing a reasoning mechanism for *all* software systems”), or, if we want to address such a broad scope, we can only (and have to admit that we can only) solve some concerns and allow some reasoning for all software systems. For everything else, you are on your own. In that case, it would be good to define the scope of UML. Our industry is sick of these “sociopathic” remedies that only make a very few rich and everyone else poor.
What did “unification” mean in UML? Was a unification between 3 popular methods the correct foundation? Or should UML have started from scratch? I started my IT career in 1997, after writing NeXTStep-based software for 7 years in the area of industrial process controls. Actually, the first time I heard about UML was during an interview in 1997, as I was transitioning between these two worlds. I had lived in a world where clearly “the object was the advantage”, and entered a world where objects made absolutely no sense at all. Little did I know then that for the following 15 years I would see people desperately trying to create “the” customer class and fail to understand why it didn’t work. There is absolutely nothing, even today, in UML or in any other framework that explains why. If you ask me, that’s another major area of concern and reasoning that UML cannot reach either. That begs the question: what is it that UML can do exactly?
So maybe, just maybe, it is time for the software industry to solve this paradox, hence start from scratch, or… admit defeat and lay UML to REST. Interestingly, I actually think that theodolites may be part of the solution… (not a joke).
Another decade of research on UML as it stands will yield no result either, and everyone reading this thread knows it. I know this is an inconvenient truth: 15+ years wasted and nothing really to show for it (of course, that’s not entirely true; any scientific mind knows that a non-result is as important as a result). We have to grow up as an industry and quit that terrible addiction to “beliefs”.
As far as UML being a tool for all software systems, yes, in effect that’s what I am saying. It is a generalist tool that cuts across languages. UML doesn’t even define the concept of an executable; you have to stereotype a UML component, say, into being an executable. It will sit within a process or memory space, which you model as a deployment node with a stereotype of <> say. Indeed, when you compose application software, you can use the component representation with a stereotype of <> and have a dependency link to the final <>. This gives you some traceability. However, given how fast software changes, that may be classed as overkill. So there isn’t anything in software that UML can’t model. Indeed, when the standard for 1.3 first emerged, a working group moved to apply Business Process Extensions. As it happens, being someone that does BPMN also, the only thing that is conceptually different between BPMN and UML is the trigger set and some ‘decisions/branching’ (for example, time and the non-mutual exclusivity of tasks; though again, UML stereotypes can be used for that too).
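The stereotype mechanism described above can be sketched in a few lines (a hypothetical illustration, not any real UML tool’s API): a stereotype is just a tag extending a base element, and a dependency records the traceability link from the application component to its executable. All the names below are invented for the example.

```python
from dataclasses import dataclass, field

# A minimal sketch (not a real UML tool's API) of stereotypes as
# lightweight extensions of base UML elements.
@dataclass
class Element:
    name: str
    stereotypes: set = field(default_factory=set)

@dataclass
class Dependency:
    client: Element    # the dependent element
    supplier: Element  # the element depended upon

# Hypothetical names: a component stereotyped as an application,
# linked to the final executable it is built into.
app = Element("OrderService", {"application"})
exe = Element("orderservice.bin", {"executable"})
trace = Dependency(client=app, supplier=exe)
```

The point of the sketch is that stereotyping adds no new metaclass: the executable is still just a component, distinguished only by its tag, which is exactly why UML can stretch to cover concepts it never defined.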
As you have correctly said, UML is a Unified Modelling Language that did indeed stem from a desire to unify the three main modelling languages of Booch, OMT and Objectory. Unlike methods such as HOOD, it wasn’t tied to a language. These days, you see UML’s formal popularity most in the Java world, since it forms part of the certification program for Java Architects, but it isn’t exclusively in that arena at all.
Also, remember, strictly by the standard, UML is not a set of diagramming components. It uses suggested representations, and it is these that have been accepted en masse.
I agree that UML adoption won’t change in another 10 to 15 years. However, again, this is both a psychological and a sociological point as well. The IT industry is full of individuals who are incredibly resistant to the introduction of engineering discipline and rigour into the field. They have always hated more formal methods of any type, especially if it involved writing documentation, preferring to just program because it was fun. They never really cared about optimising structures, communicating ideas or reasoning, and the vast majority were ecstatic when the agile manifesto arrived on the scene. This suddenly gave them a piece of proverbial paper to throw back at the management teams to say “This here is best practice! Get off our case with your documents and models”. Indeed, if we remember the aims of both XP and Scrum, they get productivity out of AVERAGE programmers, and unfortunately, by the nature of the average, we are talking one or two sigma from the mean and not the 98th percentile or above. However, for business, that makes sense. After all, if you can get productivity out of a 50%-er, why pay for a 98%-er?
However, it hasn’t ended up like that. These days, certainly in the UK, you still have to pay through the nose for the 98%-er, since the 50% mean has shifted down the scale as a result of bad discipline in agile environments. What the 50%-ers forget is that without the process as the framework, it is the people that run a process, and this requires more discipline, NOT less, in agile circles. It has also been compounded by the introduction of frameworks and too many visual drag-and-drop styles of programming, which allow people who don’t have the problem-solving skills to work in IT. Additionally, a lot of the people who fail to do agile properly are the same people who didn’t like the engineering rigour of older modelling languages and methods, and they are just as bad in the agile space. Statistically, it would be interesting to see if UML adoption would still have stagnated if pop-agile didn’t exist, even though they are completely independent things.
In short, even if you stuck something else in UML’s place, it also won’t get adopted for the same reasons UML didn’t. Unless it has a code front, it’s doomed.
EA
well, that’s exactly why I am a metaprogrammer … (http://www.ebpml.org/blog2/index.php/2013/05/17/what-if-barbara-liskov-had)
In the end you have to pick what you are doing: building, modeling or conceptualizing
Metaprogramming not only gives me 3 nice dimensions in which I can conceptualize, model and build; it also gives a clear articulation between all 3, something that UML/MDA never quite understood, and probably never will.
Waking up an old thread… In the article, CoRE was mentioned (as an alternative), but there are so many acronyms it could be. Google searches aren’t case-sensitive, so I’ve not made any progress. Can someone clarify what CoRE is (or was)? Thanks!
Sorry, but where is this CoRE mentioned? I just did a quick search and only found mentions of the word “core”, not CoRE as an acronym.
Section V C (2nd paragraph):
> Another argued: “There are a number of advantages of CoRE that are not available using UML. The key difficulties are the inability to assess cross-system performance prior to the detail design stage and the ability of domain experts to access information from UML models. Failure to assess system performance early in the design process during the system architecture definition phase leads to increased rework costs.”
I should be precise – it’s the ICSE paper: https://ieeexplore.ieee.org/abstract/document/6606618
I see and I confess I don’t know anything about it either. I’ll ask around