This great post by Greg Wilson on the Software Carpentry site proposes a new metric for language designers: the robustness of a language.
“I’d therefore like to throw out a challenge to programming language designers. Forget about parallelism or the esoteric corner cases of various type systems; instead, focus on robustness. How forgiving is your language? How well do programs written in it work when people make minor mistakes? Or to switch to industrial engineering terminology, what are your language’s tolerances? “
I think this robustness metric is also very relevant for modeling languages (quite obviously for textual languages like OCL, but also for graphical notations). Note that this concept goes beyond allowing models that are not completely well-formed (i.e. models that do not satisfy all metamodel constraints, maybe because of a mistake, but maybe just because at that point in the development process the designer is not interested in specifying a complete and precise model).
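To make the "tolerances" idea concrete, here is a minimal sketch (my own illustration, not taken from Wilson's post): the same minor slip, referencing an unset variable, is silently tolerated by default shell expansion but rejected once `set -u` (nounset) is enabled. The variable name `TMPDIR` is just an example.

```shell
# By default, an unset variable silently expands to the empty string:
unset TMPDIR
target="$TMPDIR/cache"          # silently becomes "/cache"
echo "$target"

set -u                          # nounset: unset variables are now errors
# Under set -u, the same expansion fails instead of doing something else:
if ( : "$TMPDIR/cache" ) 2>/dev/null; then
  echo "tolerated"
else
  echo "rejected"
fi
```

The second behavior is the less "forgiving" one, but arguably the more robust in Wilson's sense: the language surfaces the minor mistake instead of quietly producing a different program.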
FNR Pearl Chair. Head of the Software Engineering RDI Unit at LIST. Affiliate Professor at University of Luxembourg.
1. Not sure: is the mistake above in the source code, or is it actually in the data (hard-coded)? Anyway, I understand the point he wants to make – very interesting!
2. Concerning the metric: I would prefer the variants that give me an error over those that simply do something else.
PS
Agree – very interesting for modeling, too … *chewing*
The mistake is in the source code written by the student.
I meant: if the “/bin/bash” were given as an input parameter of the script instead of being hard-coded, would this make any difference?
I see. I think it was hard-coded. As an input it would have been even harder to spot, I guess.