This year we had a new edition of the International Workshop on OCL and Textual Modeling, co-located with the MoDELS’16 conference. Co-organizers this year were Adolfo Sánchez-Barbudo Herrera, Achim D. Brucker and myself. This was the 16th edition of the workshop, which makes it (most likely) the longest-running workshop in the history of the MoDELS conference. As usual, we were interested in any aspect involved in the definition, processing and analysis of OCL and other textual languages, including synergies and integration with other (graphical or textual) modeling languages.
The list of accepted papers for 2016 is now online (with the slides of all presentations). The full proceedings are also now online and can be freely accessed here.
A novelty this year was the option to just show up and talk for five minutes about any aspect related to the workshop topics. No need to submit a paper; in fact, you didn’t even need to present a scientific contribution. You could simply present an open problem, show a tool or raise an interesting issue for the OCL community. The topics covered in this special session were:
- HOL-OCL: A formal methods environment for OCL
- The importance of opposites for navigability in OCL expressions
- A call to arms for the collaborative definition of OCL benchmarks (more on this one here)
- Making OCL operations more deterministic and
- Translating OCL to NoSQL languages (more on this one here)
We collected these lightning contributions in a collective paper for future reference, where you can find a short description of each topic (and see who is behind it).
For an overview of the previous editions of the workshop keep reading!
OCL Workshop 2014
Read and judge by yourself the papers presented at the workshop.
OCL Workshop 2013
Quick note to let you know that the program and all pdfs of accepted papers are now available here.
OCL Workshop 2012
Some notes/summaries of the papers presented at the OCL Workshop 2012 (unless explicitly stated otherwise, the notes reflect (my understanding of) the presenter’s opinions, not mine; I can’t do both at the same time!)
Tool Supported OCL Refactoring Catalogue (Preproceeding PDF)
By: Jan Reimann, Claas Wilke, Birgit Demuth, Michael Muck, and Uwe Aßmann
Semantics-preserving transformations for OCL. Goal: to improve reusability, readability,… OCL refactoring is one of the most requested features in the survey about OCL IDEs (by Joanna Chimiak-Opoka et al). Four categories of refactorings: renamings, removals/materialisations, extractions/inlinings and separations/merges. Full catalogue at dresden-ocl.org/refactoring; 28 refactorings included. The refactorings are implemented as part of the Dresden OCL toolkit. The process to apply them is the same as when refactoring code.
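To make the renaming category concrete, here is a minimal sketch (Python, purely illustrative; `rename_property` is an invented helper, and a real refactoring such as those in Dresden OCL operates on the parsed AST, not on raw text):

```python
import re

def rename_property(ocl_expr, old, new):
    """Toy renaming refactoring: rewrite every occurrence of property
    'old' accessed via '.' in an OCL expression. Real tools work on the
    AST, so they never touch unrelated identifiers sharing the name."""
    return re.sub(r"(?<=\.)%s\b" % re.escape(old), new, ocl_expr)

inv = "self.name <> null implies self.name.size() > 0"
print(rename_property(inv, "name", "fullName"))
# self.fullName <> null implies self.fullName.size() > 0
```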
An extensible OCL Virtual Machine and Code Generator (Preproceeding PDF)
By: Ed Willink
In 2010, Wilke and Demuth showed that most of the OCL constraints in the UML 2.4.1 metamodel definition are wrong. UML 2.5 brings a major simplification: less OCL is missing and 0% is syntactically/semantically wrong. He claims that researchers can no longer write “easy” papers criticizing the UML/OCL standard; we will need to work harder. Eclipse OCL execution performance is not bad, but it is interpreted: unsuitable for heavy usage. Code generation could do better. He shows a simple example where a naïve code generation approach is four times faster. If the code generator “just” translates OCL to Java, then we get an improvement of a factor of 100. But of course, this generation has a number of challenges. Having a transformation that is semantically unsound and fast is easy; the difficulty is having a sound and efficient transformation (problems with data types, message dispatch, …). As a side note, the same OCL virtual machine extended with imperative constructs could be used to efficiently implement QVT.
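The interpreter-versus-generated-code gap can be illustrated with a toy sketch (Python stands in for the Java target; the AST encoding is invented, not Eclipse OCL’s): the interpreter walks the tree on every evaluation, while the generator pays the tree walk once and then runs native host-language code.

```python
def interpret(expr, env):
    """Tree-walking interpreter for a tiny OCL-like expression AST."""
    op = expr[0]
    if op == "var":
        return env[expr[1]]
    l, r = interpret(expr[1], env), interpret(expr[2], env)
    return l + r if op == "+" else l > r

def generate(expr):
    """Emit host-language source once; no tree walk at evaluation time."""
    op = expr[0]
    if op == "var":
        return "env['%s']" % expr[1]
    return "(%s %s %s)" % (generate(expr[1]), op, generate(expr[2]))

# toy encoding of: context Person inv: self.age + self.bonus > self.limit
ast = (">", ("+", ("var", "age"), ("var", "bonus")), ("var", "limit"))
compiled = eval("lambda env: " + generate(ast))

env = {"age": 16, "bonus": 5, "limit": 18}
print(interpret(ast, env), compiled(env))   # both evaluate to True
```

The hard part the talk points at is exactly what this sketch dodges: keeping the generated code faithful to OCL’s data types and dispatch rules.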
Featherweight OCL: A study for the consistent semantics of OCL 2.3 in HOL (Preproceeding PDF)
By: Achim D. Brucker and Burkhart Wolff
The semantics of OCL 2.3 is spread over several places (ch. 7 informal description, ch. 10 semantics described using UML, ch. 11 standard library, app. A formal semantics, not normative), and all of them should be aligned with all the other UML (sub)standards. The work focuses on the problems of the “undefinedness” semantics of OCL (first we had only the invalid value to signal exceptional situations; later the null value was introduced). Featherweight OCL formalizes the core of OCL with a special emphasis on the formalization (and consistency checking) of the integration of the null value (and its combination with the invalid one) in the OCL language. Right now, the way the standard is defined, we get some funny things: “not (not X) = X” does not hold for all values of X and, similarly, “null and null = invalid”. The formalization is done with Isabelle/HOL. Their work could be used to update Appendix A of the OCL standard.
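As a toy reconstruction (Python; the truth tables below are my rendering of the anomalies as described in the talk, not the normative OMG tables), the two “funny things” look like this:

```python
# Four-valued toy logic: strict negation, 'and' that collapses null
# into invalid but is still non-strict on false.
TRUE, FALSE, NULL, INVALID = "true", "false", "null", "invalid"

def ocl_not(x):
    # strict: any undefined value propagates as invalid
    if x in (NULL, INVALID):
        return INVALID
    return FALSE if x == TRUE else TRUE

def ocl_and(x, y):
    # non-strict on false: false absorbs anything, even invalid
    if FALSE in (x, y):
        return FALSE
    if NULL in (x, y) or INVALID in (x, y):
        return INVALID
    return TRUE

print(ocl_and(NULL, NULL))      # invalid: the "funny" case from the talk
print(ocl_not(ocl_not(NULL)))   # invalid, so not (not X) = X fails for null
```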
On the Use of an Internal DSL for Enriching EMF Models (Preproceeding PDF)
By: Filip Křikava and Philippe Collet
Context of the work: applying MDE to self-adaptive software systems. Due to the limited expressiveness of structural constraints, they started enriching EMF with OCL. When they started using OCL, they detected some shortcomings, and since their work was based on Scala, they started enriching EMF with Scala instead. About the OCL shortcomings: hard to write, side-effect-free expressions, poor support for user feedback, no support for warnings/critiques, limited flexibility in context definition, no support for repairing inconsistencies,… (all of them can be fixed, but no single solution solves them all). All these problems are linked to scalability (not in terms of performance but in terms of the number and size of the expressions). Their solution is to use Scala to define an internal OCL-like DSL. As drawbacks of their approach, they mention the loss of formal reasoning and analysis, the difficulty of ensuring side-effect-free expressions, and that they are still missing constructs to express postconditions.
Library for Model Querying – lQuery (Preproceeding PDF)
By: Renārs Liepiņš
Defining model queries in a general-purpose programming language. Their lQuery extension allows applying a selector to a collection of objects. A number of primitive selectors are predefined (e.g. to select objects of a given type, or with a given attribute value…). These primitive selectors can be combined (with union of the results, chaining, intersection, etc.). These combinations can also be stored as selector patterns and reused afterwards. No query optimization is available. The choice of the patterns was based on their own needs.
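The combination mechanism can be sketched with a few combinators (Python for illustration; the names are invented, not lQuery’s actual API):

```python
# Selectors are predicates on model objects; combinators build new
# selectors out of primitive ones, jQuery-style.
def has_type(t):
    return lambda obj: obj.get("type") == t

def has_attr(name, value):
    return lambda obj: obj.get(name) == value

def intersect(*sels):
    return lambda obj: all(s(obj) for s in sels)

def union(*sels):
    return lambda obj: any(s(obj) for s in sels)

def select(objects, selector):
    return [o for o in objects if selector(o)]

model = [{"type": "Class", "abstract": True,  "name": "Shape"},
         {"type": "Class", "abstract": False, "name": "Circle"},
         {"type": "Package", "name": "geometry"}]

# a stored, reusable selector pattern
abstract_classes = intersect(has_type("Class"), has_attr("abstract", True))
print([o["name"] for o in select(model, abstract_classes)])   # ['Shape']
```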
Ontology Driven Design of EMF Metamodels and Well-formedness Constraints (Preproceeding PDF)
By: Benedek Izsó, Zoltán Szatmári, Gábor Bergmann, Ákos Horváth, István Ráth and Dániel Varró
From textual requirements (in controlled English or Protégé) to DSML editors that only allow DSM models satisfying the constraints of the requirements. Process: from requirements to ontologies and from ontologies to DSL editors. This includes an OWL2-to-Ecore and an OWL2-to-IQPL (IncQuery Pattern Language, i.e. constraints expressed as graph patterns) mapping. A limited subset of SWRL to express rules can also be translated to IQPL. A mismatch due to the difference between the open-world and closed-world assumptions exists. The transformation is implemented in the VIATRA2 framework. An Eclipse plug-in can be used to execute the transformation.
Modeling and Executing ConcurTaskTrees using a UML-and SOIL-based Metamodel (Preproceeding PDF)
By: Jens Brüning, Martin Kunert and Birger Lantow
ConcurTaskTrees (CTT) is a hierarchy-oriented, workflow-based modeling language. They have created a metamodel for it. CTT models are not sound by construction; some of the problems may be detected/prevented using OCL constraints. Modeling CTT with USE can provide some soundness analysis. The operational semantics are implemented with SOIL. A USE plug-in that lets users execute CTT models is also available. Interesting question for future work: can the soundness properties for workflow nets defined by van der Aalst be expressed with OCL?
Automatic Generation of Test Models and Properties from UML Models with OCL Constraints (Preproceeding PDF)
By: Miguel A. Francisco and Laura M. Castro
Black-box testing approach with the goal of writing less and testing more. The approach is a combination of property-based testing (declarative statements about properties that the system must hold, from which tests are generated) and model-based testing. They use Dresden OCL to parse the UML model plus the pre/postconditions in OCL and generate a set of abstract properties. QuickCheck uses these properties to generate the tests for the system implementing the model.
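The core loop of such property-based testing can be hand-rolled in a few lines (Python sketch; `withdraw` and its contract are invented examples, and in the paper Dresden OCL derives the properties and QuickCheck does the generation):

```python
import random

def withdraw(balance, amount):        # the system under test
    return balance - amount

# Contract derived (by hand here) from an OCL pre/postcondition:
#   pre:  amount > 0 and amount <= balance
#   post: result = balance - amount and result >= 0
pre  = lambda balance, amount: 0 < amount <= balance
post = lambda balance, amount, result: result == balance - amount and result >= 0

random.seed(1)
checked = 0
for _ in range(200):
    b, a = random.randint(0, 100), random.randint(1, 100)
    if pre(b, a):                     # only feed inputs satisfying the pre
        assert post(b, a, withdraw(b, a))
        checked += 1
print("contract held on", checked, "generated cases")
```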
Transformation rules from UML4MBT meta-model to SMT meta-model for model animation (Preproceeding PDF)
By: Jérôme Cantenot, Fabrice Ambert and Fabrice Bouquet
Goal: generation of tests using an SMT solver. UML4MBT is a subset of UML specific to model-based testing (only three diagrams, inheritance not allowed). The OCL4MBT language does the same for OCL (strings not allowed, real and integer variables bounded). Systems expressed with these sublanguages are transformed into an SMT meta-model (based on the standard SMT-LIB representation of SMT problems). Then, this information can be used as input to an SMT solver to animate the model.
Model-based formal specification of a DSL library for a qualified code generator (short paper) (Preproceeding PDF)
By: Arnaud Dieumegard, Andres Toom and Marc Pantel
Domain of critical embedded systems. Most people in the domain use Simulink/Scicos. The OSS GeneAuto code generator takes Simulink models and outputs C and Ada code. There is a need for a formal specification of the code generator's input language semantics, preferably suited for use by industrial system and software engineers. The main challenge is the semantics of the “block” primitives due to the huge variability of most of them. E.g., the “simple” sum block takes 19 pages in the original documentation to explain the meaning of its parameters.
The Secret Life of OCL Constraints (short paper) (Preproceeding PDF)
By: Oliver Hofrichter, Lars Hamann and Martin Gogolla
Identification of manifestations of OCL constraints beyond the expected (formal) use: informal descriptions of constraints, feedback (helping to understand the errors in the model), implicit assumptions for model transformations (invariants are implicitly assumed to be kept by the transformations). The context project is an initiative of the German government to standardize IT for e-government.
Experiences using OCL for Business Rules on Financial Messaging (short paper) (Preproceeding PDF)
By: David Garry
They have built their own OCL implementation to execute OCL rules over XML data (their implementation generates Java from OCL). Problems they have encountered in the process: 1) identifying exact error locations to provide appropriate feedback, 2) supporting additional data types and functions, and 3) checking data against external sources.
We all agreed that two key trends in this workshop edition were:
- Many people use variants of the standard OCL. These variants usually involve (at the same time) restricting the language so that only a core OCL is used and extending it with new types/functions needed for the domain they are targeting. Clearly, there is a need to add a modularization mechanism to OCL that allows people to import (domain-specific) libraries on top of a core OCL language. The new version of OCL (to appear in 2013) will advance in this direction.
- Several types of formalisms are used to analyze/reason about OCL expressions (CSPs, SAT, SMT, HOL,…). The choice of formalism seems to be closely related to the expertise available in the research group. A comparison of the trade-offs between the different formalisms when reasoning on OCL expressions could be very useful. Martin Gogolla and Fabian Büttner agreed to take the lead on creating a working group on this, involving representatives of the different OCL-to-X tools available.
OCL Workshop 2011
The proceedings of the International Workshop on OCL and Textual Modelling 2011 have now been published in the online and open-access journal Electronic Communications of the EASST.
OCL Workshop 2010
Papers freely available: EASST proceedings (volume 36)
Around 30 participants introduce themselves.
09.30-10.30 Session: OCL Integration and Tooling
- 09.30-09.50 Re-engineering Eclipse MDT/OCL for Xtext. Edward Willink.
Eclipse OCL has evolved to use Xtext (before, it was using the IMP project, but that was an incubation project and could not be used to release MDT/OCL). Contrast between the old LPG+IMP approach and the new Xtext one. OCL is complex, so there is a lot of difference between the concrete syntax tree and the abstract syntax graph. Xtext helped a lot with that, since it generates the lexer, LL parser, editor, etc.
The paper contains a comparison between both implementations (Xtext and LPG+Java). For instance, the Xtext implementation is at least five times smaller in terms of source code, but Xtext is 10 times larger in terms of class sizes and about 11 times slower.
- 09.50-10.10 Integrating OCL and Textual Modelling Languages. Florian Heidenreich, Jendrik Johannes, Mirko Seifert, Michael Thiele, Christian Wende and Claas Wilke.
Appropriate front-end tooling to apply OCL to different languages is missing. They target the integration of OCL into editors for textual modeling languages. Two possible generic strategies: external OCL definitions (OCL expressions provided in a separate file) and embedded definitions (OCL syntax mixed with the textual modeling language's own syntax). There are use cases for both alternatives.
The proposed integration process consists of five steps: metamodel integration, concrete syntax integration, metamodel adaptation, static semantics integration and dynamic semantics integration. The external OCL strategy only requires two of these steps. All of them are challenging. For instance, concrete syntax integration is difficult because OCL and the textual language may contain overlapping reserved words (one possibility is having both grammars at the same time and switching from one to the other depending on the presence of a given token).
- 10.10-10.30 A Feature Model for an IDE4OCL. Joanna Chimiak-Opoka and Birgit Demuth.
(Nice) Using the business motivation model (OMG) to motivate and explain the paper.
Vision: OCL as the supplementary language in practice. Goal: provide an IDE4OCL framework to improve the usability of OCL tools. Mission: develop IDE4OCL with partners. They presented what the OCL tool landscape should be, with IDE4OCL focused on the specification, evaluation and verification of OCL expressions and other tools (testing tools, modeling tools) built on top of it. The specific features for IDE4OCL were decided after analyzing the answers to a survey they started last year (see the paper and the survey from last year). The paper presents a feature model describing all the selected features. Further input is requested to help them prioritize these features and decide which ones are mandatory.
11.00-12.30 Session: OCL Foundations and Applications
- 11.00-11.20 A Specification-based Test Case Generation Method for UML/OCL. Achim D. Brucker, Matthias P. Krieger, Delphine Longuet and Burkhart Wolff.
A way to find bugs early. Also useful for validating models against legacy systems (test cases help understand whether the model corresponds to the system). Of course, it suffers from state-explosion problems, as all techniques of this kind do. First, OCL is transformed into Standard Logic (details in the paper!). The result is unfolded and rewritten into DNF. Then, their test generator (HOL-TestGen) can generate the tests (expressed in HOL). HOL-TestGen is integrated in Isabelle.
- 11.20-11.40 MySQL-OCL: A Stored Procedure-Based MySQL Code Generator for OCL. Marina Egea, Carolina Dania and Manuel Clavel.
Motivation: the need to evaluate OCL expressions on really large scenarios, and the need to generate code from UML/OCL models where the persistence layer is a (MySQL) relational database. For both scenarios it would be useful to have an OCL-to-SQL (stored procedures) transformation. It is not the first OCL-to-SQL approach, but it has some novel features. MySQL4OCL is defined recursively on the structure of OCL expressions. OCL iterator expressions are mapped to stored procedures, and the final output of the evaluation (e.g. recording the instances that do not satisfy the constraint) is saved in a table. This table can later be queried to see the results of the evaluation.
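The flavor of such a compilation can be sketched for the simplest case, a forAll over all instances (Python generating the SQL text; the table and column names are hypothetical and the mapping is far cruder than MySQL4OCL's actual recursive scheme):

```python
# Idea: instead of evaluating the invariant in memory, emit a query
# that returns exactly the violating rows (the counterexamples).
def forall_to_sql(table, column, op, literal):
    """context C inv: C.allInstances()->forAll(x | x.column op literal)
    becomes a SELECT over the negated condition."""
    negated = {">": "<=", "<": ">=", "=": "<>"}[op]
    return "SELECT * FROM %s WHERE %s %s %s" % (table, column, negated, literal)

print(forall_to_sql("Employee", "salary", ">", 0))
# SELECT * FROM Employee WHERE salary <= 0
```

An empty result set means the invariant holds; any returned row pinpoints an offending instance, which matches the paper's "save violations in a table" design.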
- 11.40-12.00 Navigating Across Non-Navigable Ecore References via OCL. Martin Hanysz, Tobias Hoppe, Axel Uhl, Andreas Seibel, Holger Giese, Philipp Berger and Stephan Hildebrandt.
SAP decided to migrate MOIN (a proprietary modeling framework at SAP) to EMF. Several problems have to be faced during the migration. For instance, MOIN is an implementation of CMOF, which treats associations differently from EMF (in CMOF, associations always have association ends, which implies that it is always possible to address the opposite of a reference; in EMF, associations can be unidirectional, and bidirectional ones are simulated). The solution implemented in the paper uses annotations to specify opposite role names in metamodels. The good point is that this does not break existing EMF-based toolsets. Adapted OCL interpreters can use this information when navigating through the association ends. The implementation of the interpreters is key to ensure scalability.
- 12.00-12.20 Towards a Conceptual Framework Supporting Model Compilability. Dan Chiorean and Vladiela Petrascu.
The importance of defining Well-Formedness Rules (WFRs) that specify the static semantics of modeling languages derives straight from the MDE objectives. Models conforming to the WFRs are usually called well-formed models, but the authors propose the concept of “compilable models” because there may be other kinds of rules that a model has to comply with (methodological rules, metric rules, business rules) and because the proposal stresses the similarity between modeling and programming languages. As of today, model compilability is more a goal than a reality, so the authors propose a framework meant to support the correct specification of WFRs. The framework is based on detailed test-driven specifications (similar to test-driven programming), testing-oriented formal specifications (to facilitate efficient error diagnosis in case of assertion failure) and correct and efficient formal specifications. Due to the differences between the “classical” invariants used in BCR and WFRs, the importance of choosing the appropriate context is emphasized. The proposals are exemplified by means of two well-known case studies.
14.00-15.30 Session: Textual Modelling
- 14.00-14.20 Verified Visualisation of Textual Modelling Languages. Fintan Fairmichael and Joseph Kiniry.
When a language has textual and graphical aspects, consistency between them is important. Research question: can we allow simultaneous editing of both without worrying about consistency?
Uses BON as example modeling language. BON has both textual and graphical representation (one-to-one mapping between them). Their approach is to formalize the relationship between the two (they use PVS). Given this relationship they can determine if one corresponds to the other and provide one from the other. In the case of BON, this is a bijective relationship. (Incremental) model evolution can also be dealt with in a similar way (defining relationships between the diffs).
Not sure why this is not done relying on the abstract syntax representation of the model (e.g. defining equivalence between a textual and a graphical representation in terms of whether they generate the same metamodel representation?).
- 14.20-14.40 Support for Bidirectional Model-to-Text Transformations. Anthony Anjorin, Marius Lauder, Michael Schlereth and Andy Schürr.
The title is self-explanatory (especially if you add “using Triple Graph Grammars” at the end). This problem can be solved using tools like TCS or Xtext, but not if you want to use your own parser, metamodel and editor (as requested by their industrial partners). Additional requirements: homogeneous, declarative and bidirectional. Their solution: extend TGGs to support bidirectional model-to-text transformations. You can use an arbitrary parser as long as it generates a simple tree that they can use as input for the TGG.
- 14.40-15.00 An Overview of F-OML: An F-Logic Based Object Modeling Language. Mira Balaban and Michael Kifer.
Intended usage: extending UML (like OCL), reasoning, testing, meta-modeling. Features of the language: polymorphism, multi-level modeling and definition of model instantiations. F-OML is a semantic layer on top of PathLP (small language of guarded path expressions) formulating two main OO concepts: class and property plus several class constructors (e.g. to create a class of objects as intersection of two classes), property reification and transitive closure operations. F-OML has the expressive power of Logic programming and captures a part of second-order logic.
- 15.00-15.10 On the Need of User-defined Libraries in OCL. Thomas Baar.
There are a lot of libraries in Java, easy to download and immediately use in your Java programs. Also, libraries are organized in bundles. Why can't we do the same in OCL? Right now, OCL has just one library (the standard one). User-defined libraries in OCL would contribute to a more active OCL community.
He shows an example of how to solve the same problem (an isOrphan method in a graph node) in Java and OCL. In Java it is easy to solve by reusing a user-defined Java graph library; in OCL such a thing does not exist. Next, it is shown how to solve the problem in OCL assuming that a library mechanism for OCL were possible. Looks nice but not free of challenges: no import statement in OCL, different import statements in UML, and no mechanisms for customization.
16:00-17:30 Discussion session
- 16.00-16.30 Invited Presentation: Evolution of the OCL OMG Specification. Mariano Belaunde.
Mariano (Chairman of the OCL finalization and revision task force) talks about the evolution of the OCL and of the process for evolving it.
In 2003, OCL became a stand-alone specification. In 2005, people from the QVT standard took over the OCL standardization process (which was somewhat orphaned at the time) and finalized OCL 2.0 (which was only partially aligned with UML 2.0). In 2009, the alignment was completed. In 2011, OCL 2.3 (or 2.4) is to come.
About the maintenance process. Aim of the OMG: to be open (everybody can submit issues), fair (voting procedure but only for OMG members) and transparent (all changes to the specification tracked). This makes it very time-consuming.
Problems in the OCL spec maintenance: 1) not enough manpower (companies are interested in creating new standards, not in maintaining existing ones); 2) too formal (with respect to the tools available for validating/verifying this formality), with several partial formal representations that are not always synchronized. The audience does not agree: the most important problem is that there are several partial formal representations that do not always coincide. Novelties in OCL 2.3: enhancing the OCL library, OCL collections as plain objects, continuing the UML2 alignment. Still many other future challenges (e.g. OCL reflection, support for stereotypes,…).
Collaboration from the attendees is welcome to help fix all the problems with the specification (instead of just pointing at them!).
OCL Workshop 2009
The workshop took place in Denver as part of the MoDELS’09 conference. The summary of the workshop papers and discussions (at the end) is the following:
- Extending OCL with Null-References: Towards a Formal Semantics for OCL 2.1, by Brucker, Krieger and Wolff: Originally OCL had a single exception element: invalid (previously OclUndefined). This is useful to model exceptional situations (e.g. division by zero) but not appropriate to model the absence of values. To fix this, the OMG introduced the null value. This is clearly a step in the right direction, but the paper points out some problems made during the introduction of this new null value and proposes a consistent integration of null into the semantics of OCL 2.0, showing how strict and non-strict operations and navigation expressions should behave in the presence of invalid and null values.
- On Better Understanding OCL Collections -or- An OCL Ordered Set Is Not an OCL Set, by Buettner, Gogolla, Hamann and Kuhlmann: The paper highlights the problems of the current definition of the OrderedSet type and, from there, starts refactoring the hierarchy of collection types in OCL (i.e. Bag, Set, Sequence and OrderedSet), rethinking the relationships between them depending on two main properties: whether they are insertion-order independent (Bag and Set) and/or enforce single element occurrence (Set and OrderedSet). As a solution, they propose to specify OrderedSet as a subclass of Sequence (and not as a subclass of Set). This may seem strange (shouldn't an ordered set be a set?), but the main problem here is that OrderedSet as defined by the OMG does not really correspond to the mathematical concept of an ordered set, which forces this unnatural hierarchy.
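The proposed placement in the hierarchy can be mimicked in a few lines of Python (toy classes, of course, not the OCL standard library):

```python
# OrderedSet modeled as a Sequence that ignores re-insertions: it keeps
# insertion order but holds each element at most once.
class Sequence(list):
    def append_elem(self, x):
        self.append(x)

class OrderedSet(Sequence):
    def append_elem(self, x):
        if x not in self:          # SingleElementOccurrence property
            self.append(x)

s = OrderedSet()
for x in [3, 1, 3, 2, 1]:
    s.append_elem(x)
print(list(s))   # [3, 1, 2]: insertion order kept, duplicates dropped
```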
- Checking Unsatisfiability for OCL Constraints, by Clavel, Egea and Garcia de Dios: The paper proposes a new method for checking if a set of constraints is satisfiable. There are several solutions for this (self-advertisement: check for instance our UMLtoCSP tool), but none is perfect (due to the undecidable nature of the full OCL language). They present an approach that is automatic and unbounded, but that does not support the full OCL expressiveness. The method works by translating the OCL expressions into first-order logic and using state-of-the-art solvers to analyze the generated logic predicates. All in all, a nice addition to the set of methods for checking the satisfiability of UML/OCL models. As I've said several times, the more we have the better: depending on the specific characteristics of the problem at hand, designers should be able to choose the tool that offers the best trade-off of features for that specific problem.
- OCL Contracts for the Verification of Model Transformations, by Cariou, Nicolas, Barbier and Djemam: The goal of the paper is to be able to verify that a pair of models is a valid result of an (endogenous) transformation. This is a nice idea, since it allows designers to manually modify the models generated by the transformation but still ensure that the manually modified transformed model is consistent with the transformation definition. Their approach works by, given a transformation specified as an OCL operation contract, extracting a set of constraints that must hold between the two models after the transformation. The approach is still at a preliminary stage since the extraction process is not yet automated (and this is possible, at least for other transformation languages such as TGG or QVT, as we've done here).
- Requirements Analysis for an Integrated OCL Development Environment, by Chimiak-Opoka, Demuth, Silingas and Rouquette: The paper presents a list of requirements that an ideal IDE for the OCL language should have. They (as I do) believe that having a proper IDE is a mandatory requirement for widespread adoption of OCL, especially among software development companies. See the discussion section below for more details on this.
- Generation of Formal Model Metrics for MOF-based Domain Specific Languages, by Hein, Engelhardt, Ritter and Wagner: The paper addresses the automatic generation of metrics definitions (represented as instances of the OMG software metrics metamodel) for DSLs with their Metrino tool. They have defined (a kind of) set of measurement patterns in OCL that, when applied to a specific DSL, generate a full list of metrics definitions. These metrics can then be executed on models instance of the DSL using an OCL evaluator. This is nice work, but I'm more interested in more advanced metrics. Metrics counting the number of classes, attributes, associations,… are not that useful. I'd be interested in metrics that tell me, for instance, how complex or readable a model is by aggregating several basic metrics and comparing their values with predefined, empirically determined and validated thresholds.
- A MOP-Based DSL for Testing Java Programs using OCL, by Clark: The DSL presented in this paper allows designers to write model-based tests in OCL. The novelty of the paper is that, until now, these model-based tests could not be used to test the implementation of the model (in Java, for instance) because, quite often, the mapping process is not straightforward, and this caused the OCL expressions to be inconsistent with the implemented model. This paper presents a method to overcome this problem and enables model-based testing of OCL expressions against different Java implementations of the same model.
- Declarative Models for Business Processes and UI Generation using OCL by Brüning and Wolff
The authors use OCL to express complex temporal relations in a business process model. Thanks to this, they are able to express flexible execution logic and UI control flows for workflows. These OCL-extended process models can then be animated using the USE tool.
- Specifying OCL Constraints on Process Instantiations, by Killisperger, Stumptner, Peters and Stueckl: Software development processes at Siemens are modelled on the basis of a generic reference process that individual business units adapt (instantiate) according to their individual needs. OCL constraints defined by business modelers (good to know that business people can write OCL, even if it is a specific subset of the language!) ensure that this tailoring is done in a consistent way. The consistency analysis is performed by translating the models and constraints into a Constraint Satisfaction Problem. The method also allows defining correction rules for each constraint.
- Wrap-up and discussion: The discussion centered around two aspects: the ideal features of an IDE for the OCL language, and the relationship between the OCL workshop community and the OMG. Regarding the first aspect, we proposed our list of features and answered a survey on this topic (the survey is available here; feel free to propose your desired features). For the second aspect, we agreed that the community around the OCL workshop should try to influence the OMG standardization process regarding the OCL language, to make sure that the problems and improvements we detect and propose are taken into consideration when preparing new versions of the standard.