MiSE 2009


I’m attending the MiSE (Modeling in Software Engineering) Workshop 2009, part of the ICSE’09 conference.

On this first day we had four full paper presentations (all papers are available in the IEEE Digital Library, but if you don’t have access and want to read a paper, I’m sure the authors will gladly send you a copy):

Toward Engineered Architecture Evolution by Chaki, Diaz-Pace, Garlan, Gurfinkel, Ozkaya proposes an approach to engineer the process of architecture evolution. Given the final architecture we want to target, they split the evolution process into a sequence of steps where each step applies a transformation operator to evolve the current architecture. During the paper discussion, the main question was whether we can really assume that we know the target architecture when we start evolving the current one. In fact, to me, the problem of selecting the right architecture for a given system is the biggest challenge. I’d like to see an expert system that, given a prioritized list of non-functional requirements (cost, flexibility, security,…), could recommend: 1 – an architectural style and 2 – a set of technologies/products to implement that style (many of the performance characteristics depend more on the specific software components used in the architecture than on the properties of the style itself).
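
As a rough illustration of the kind of recommender I have in mind (everything below is a hypothetical sketch of my own, not part of the paper), one could imagine scoring candidate architectural styles against a prioritized list of non-functional requirements:

import java.util.*;

// Hypothetical sketch (not from the paper): rank architectural styles by how well
// they match a prioritized list of non-functional requirements.
public class StyleRecommender {

    // How well each style supports each NFR, on a 0-5 scale (illustrative numbers only).
    static final Map<String, Map<String, Integer>> SUPPORT = Map.of(
        "layered",      Map.of("cost", 4, "flexibility", 3, "security", 4),
        "microkernel",  Map.of("cost", 3, "flexibility", 5, "security", 3),
        "event-driven", Map.of("cost", 3, "flexibility", 4, "security", 2));

    // Weighted sum of the support scores; the style with the highest total wins.
    static String recommend(Map<String, Integer> nfrWeights) {
        String best = null;
        int bestScore = Integer.MIN_VALUE;
        for (Map.Entry<String, Map<String, Integer>> style : SUPPORT.entrySet()) {
            int score = 0;
            for (Map.Entry<String, Integer> nfr : nfrWeights.entrySet()) {
                score += nfr.getValue() * style.getValue().getOrDefault(nfr.getKey(), 0);
            }
            if (score > bestScore) {
                bestScore = score;
                best = style.getKey();
            }
        }
        return best;
    }

    public static void main(String[] args) {
        // Stakeholder priorities: security matters most, then cost, then flexibility.
        System.out.println(recommend(Map.of("security", 3, "cost", 2, "flexibility", 1)));
    }
}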

In Relationship-Based Change Propagation: A Case Study by Cabot, Chechik, Diskin, Easterbrook, Lai, Nejati we discuss how to relate models at different abstraction levels by explicitly defining relationships among them. The possible kinds of relationships are pre-defined at the meta-model level for each pair of model types. Relationships can then be used to facilitate change propagation (i.e. given a change on a higher-level abstraction model, what are the changes we have to perform on the lower-level ones?). Our preliminary algorithm for change propagation informs the designer about the parts of the downstream model that need not be changed and points to the parts that have to be manually completed to correctly evolve it (a kind of “fill in the gaps” process). This is still preliminary work (as many attendees kindly pointed out) but we are working on generalizing the results.
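
A minimal sketch of the general intuition (my own simplification in Java, not the algorithm described in the paper): given the relationships recorded between the two models, a change to a high-level element tells us which downstream elements can be left alone and which ones need a manual look.

import java.util.*;

// Simplified sketch of relationship-based change propagation (my own toy version,
// not the paper's algorithm). Relationships link each high-level element to the
// low-level elements that realize it; when a high-level element changes, the related
// low-level elements are flagged for manual revision and the rest is reported as untouched.
public class ChangePropagation {

    static void propagate(String changedElement,
                          Map<String, List<String>> relationships,
                          Set<String> lowLevelModel) {
        Set<String> toRevise = new TreeSet<>(relationships.getOrDefault(changedElement, List.of()));
        Set<String> untouched = new TreeSet<>(lowLevelModel);
        untouched.removeAll(toRevise);
        System.out.println("Revise manually:  " + toRevise);
        System.out.println("No change needed: " + untouched);
    }

    public static void main(String[] args) {
        Map<String, List<String>> rels = Map.of(
            "Order",    List.of("OrderTable", "OrderDAO"),
            "Customer", List.of("CustomerTable"));
        propagate("Order", rels, Set.of("OrderTable", "OrderDAO", "CustomerTable"));
    }
}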

Raising the Level of Abstraction in the Development of GMF-based Graphical Model Editors by Kolovos, Rose, Paige, Pollack provides an alternative way of defining a graphical modeling framework for our domain-specific languages, especially useful for non-GMF experts. Given a textual description of the meta-model plus some simple annotations, EuGENia generates all the required GMF models. The paper discussion focused on the typical usability vs expressivity trade-off: more expressive annotations make it possible to support more complex editors, but as a trade-off we lose the simplicity of the approach. It was suggested that EuGENia can be used as a first step; designers requiring more advanced features could then extend the generated GMF models (exactly as they would do in the “normal” process).

Tailoring a Model-Driven Quality-of-Service DSL for Various Stakeholders by Oberortner, Zdun, Dustdar. This paper deals with the problem of how to involve different kinds of users/stakeholders (especially non-technical ones) in the definition, validation and use of domain-specific languages. The solution? Divide the DSL into multiple (related) sub-languages at different abstraction levels and tailor each sub-language to a specific kind of stakeholder.
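
A toy illustration of the layered-DSL idea (my own hypothetical example, not the authors’ language): a business stakeholder only sees coarse service levels, while a technical sub-language refines each level into measurable thresholds.

// Hypothetical illustration of the layered-DSL idea (not the authors' language):
// a business stakeholder states QoS in coarse terms, and the technical sub-language
// refines it into concrete, measurable constraints.
public class QosLayers {

    // Business-level sub-language: named service levels only.
    enum ServiceLevel { BRONZE, SILVER, GOLD }

    // Technical-level sub-language: measurable thresholds.
    record TechnicalQos(double minUptimePercent, int maxResponseMillis) {}

    // Mapping maintained by the technical stakeholders.
    static TechnicalQos refine(ServiceLevel level) {
        return switch (level) {
            case BRONZE -> new TechnicalQos(99.0, 1000);
            case SILVER -> new TechnicalQos(99.5, 500);
            case GOLD   -> new TechnicalQos(99.9, 200);
        };
    }

    public static void main(String[] args) {
        System.out.println(refine(ServiceLevel.GOLD));
    }
}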

The day ended with the presentation of two posters on inconsistency resolution and on V&V approaches for UML.

On the second day of the MiSE (Modeling in Software Engineering) Workshop 2009 we had seven paper presentations, a presentation of the ReMoDD project (Repository for Model Driven Development, I talked about it in a previous post) and a discussion session.

My summary/comments on the workshop events of the day are:

Formal specification of system functions by Spanfelner, Leuxner, Sitou presents the compositional Marlin language to specify system functions as services (where a service is a partial relation between system inputs and outputs). Their algebraic approach allows verifying several correctness properties of the service composition.
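
One (over-)simplified way to read “a service is a partial relation between system inputs and outputs”, and how two such services could compose sequentially, is the following (my own paraphrase, not Marlin notation):

\[ S \subseteq I \times O \]
\[ S_2 \circ S_1 = \{\, (i, o) \mid \exists m .\ (i, m) \in S_1 \wedge (m, o) \in S_2 \,\} \]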

However, as always, when I see a presentation involving formal methods, the two biggest challenges that come to my mind are the usability of the approach for end-users and the kind of feedback the method provides. These two aspects are key to convincing users of the usefulness of the approach (if somebody is interested in my thoughts on applying verification techniques in practice, you can take a look at the UML/OCL Verification in practice paper).

Finding Inconsistency for UML-Based Composition at Program Level by Chavez, Chen checks that a Java implementation of a UML class diagram including composition relationships is correct. In particular, they check that the destroy operation in the Java classes destroys all input links to the object according to the composition relationships in which the object participates. To do so, they use a SAT solver to generate possible scenarios and then instrument the Java bytecode to verify the results of executing the destroy methods on those scenarios. Although we agreed that their solution is overly complex for basic verifications of “structural” UML constraints (which can be solved using a more lightweight approach such as this one, and sorry for the self-reference again!), we also agreed that it offers a promising approach for the verification of more challenging correctness properties in the future. I especially like that they are able to use real Java code as the input of the verification process.
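
To make the checked property concrete, here is a hypothetical (and deliberately simplified) example of what a correct destroy is expected to do when a class plays the whole in a UML composition: destroying the composite must also clear the links to all of its parts.

import java.util.ArrayList;
import java.util.List;

// Hypothetical illustration (not taken from the paper) of the property being checked:
// in a UML composition Order <>-- OrderLine, destroying the composite must also
// remove the links to all of its parts.
class OrderLine {
    Order order;                        // link back to the composite
}

class Order {
    final List<OrderLine> lines = new ArrayList<>();

    void addLine(OrderLine line) {
        line.order = this;
        lines.add(line);
    }

    // A destroy() that respects the composition semantics: every part loses its link
    // to this object (in full UML semantics the parts themselves would be destroyed too).
    void destroy() {
        for (OrderLine line : lines) {
            line.order = null;
        }
        lines.clear();
    }
}

A buggy destroy() that skipped the loop would leave each OrderLine still pointing to a destroyed Order, which is exactly the kind of dangling link their generated scenarios are meant to expose.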

Model-level Simulation for COLA by Hermannsdoerfer, Haberl, Baumgarten presents a model-level simulator for the COLA language (for the design of distributed and safety-critical real-time systems). The input data can be entered manually, generated as test data or (even better) come from a running version of the system, which provides a more realistic simulation scenario. In the paper discussion, we mentioned that simulators are not a perfect solution (expert knowledge of the domain is needed to get meaningful results) but, for sure, better than no simulation at all!

Summary of the papers in the second workshop session:

Tackling High Variability in Video Surveillance Systems through a Model Transformation Approach by Acher, Lahire, Moisan, Rigault deals with the construction of systems with multiple variability factors. I like their proposal of dealing with variability at two different levels: at the task level (requirements) and at the software components level (implementation). Each level is represented as a feature model. The idea is that stakeholders first use the task-level model to state the requirements of the desired surveillance system. Then, designers can use these requirements to assemble a set of software components that provide the required services.
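
A rough sketch of the two-level idea (my own toy example, not the authors’ feature models): stakeholders pick features at the task level, and each task-level feature is mapped to the software components that can realize it.

import java.util.*;

// Toy illustration of two-level variability (not the authors' models): task-level
// features selected by stakeholders are mapped to the components that implement them.
public class TwoLevelVariability {

    static final Map<String, List<String>> TASK_TO_COMPONENTS = Map.of(
        "intrusion detection", List.of("MotionDetector", "AlarmNotifier"),
        "people counting",     List.of("BlobTracker", "Counter"),
        "night operation",     List.of("InfraredCamera"));

    // Collect the components needed to realize the selected task-level features.
    static Set<String> assemble(Set<String> selectedTasks) {
        Set<String> components = new TreeSet<>();
        for (String task : selectedTasks) {
            components.addAll(TASK_TO_COMPONENTS.getOrDefault(task, List.of()));
        }
        return components;
    }

    public static void main(String[] args) {
        System.out.println(assemble(Set.of("intrusion detection", "night operation")));
    }
}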

Model Transformation of Dependability-Focused Requirements Models by Mustafiz, Kienzle, Vangheluwe presents a manual transformation from use cases to activity diagrams, with an emphasis on the dependability (safety, reliability, availability,…) aspects of the system. Some extensions to UML are proposed to deal with these issues. It is not clear whether the approach can be automated (I’m always skeptical about automating anything that involves textual use case descriptions, unless these descriptions follow a very strict set of templates).

On the Use of Models during Software Execution by Bencomo is an easy-to-read, state-of-the-art description of the “models at runtime” field. The basic idea is that self-adaptive systems (i.e. systems that need to survive changes in their execution environment without human intervention) need an internal model representation of themselves to reason about and adapt to external changes. These runtime models are different from the typical design models and thus require specific techniques, some of which are discussed in this paper.
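
A very rough sketch of the idea (my own illustration, not taken from the paper): the system keeps a model of itself, compares that model against what it observes in its environment, and reconfigures when the two diverge.

// Toy sketch of "models at runtime" (my own illustration): the system holds a model
// of its current configuration, checks it against the observed environment, and
// adapts only when model and reality no longer match.
public class SelfAdaptiveLoop {

    // Simplistic runtime model: the configuration the system believes it is running.
    private String currentConfiguration = "wifi";

    void monitorAndAdapt(boolean wifiAvailable) {
        String required = wifiAvailable ? "wifi" : "cellular";
        // Reason over the runtime model: adapt only if it no longer matches reality.
        if (!required.equals(currentConfiguration)) {
            currentConfiguration = required;     // reconfigure and update the model
            System.out.println("Reconfigured to " + required);
        }
    }

    public static void main(String[] args) {
        SelfAdaptiveLoop system = new SelfAdaptiveLoop();
        system.monitorAndAdapt(false);   // environment changed: Wi-Fi lost
        system.monitorAndAdapt(false);   // model already matches, no adaptation
    }
}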

Non-Functional Requirements Analysis Modeling for Software Product Lines by Nguyen discusses the problem of mixing variability (in software product lines) with non-functional requirements. The basic idea is to use non-functional requirements to select the features in the product line that best satisfy the stakeholders’ expectations.

After the paper presentations, Robert France presented the ReMoDD (Repository for MDD) project. I already wrote about this project, but this time Robert seems to have good news: they may get the funding needed to (finally!) create the online repository. Now is your opportunity to give us feedback about the idea (your requirements for the repository, ideas on how to convince companies to share their models,…).

Finally, we had a wrap-up discussion session. To be honest, the discussion was lively, but the topics were more or less the typical ones (modeling as an engineering discipline or as an art?) and the organizers took notes, so check the workshop web page for the discussion summary. The most important outcome of the discussion? We will have a MiSE’10 workshop! (And probably with some kind of modeling challenge where everybody will be able to showcase the benefits of using their algorithms/techniques/tools on a given case study.)
