I was wondering if any of you knows of or uses a tool to automatically grade (UML) models created by students. I’m fully aware this is far from easy (which probably explains why I don’t know of any such tool that is widely used, at least not among the people I know who teach MDE). Clearly, a direct approach that compares and matches the model submitted by the student against the solution model created by the teacher wouldn’t work. It would imply solving at least two big problems:
- differences in the names of the model elements (e.g. the concept “car” in the solution model could be called “vehicle” in the student’s one) and
- detecting the semantic equivalence of models with different structures (e.g. alternative ways to define a taxonomy, or the use of association classes vs. an equivalent model without them). We could automatically apply some predefined refactorings to both the instructor’s solution and the students’ models to try to obtain normalized models that are easier to compare, but even this wouldn’t solve the problem in the general case.
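To make the first problem concrete, here is a minimal sketch of name normalization via a teacher-provided synonym table before comparing the class sets of two models. Everything here (the synonym table, the representation of a model as a set of class names, the function names) is a hypothetical illustration, not an existing tool:

```python
# Hypothetical sketch: normalize element names before comparing models.
# The synonym table and the model representation are assumptions for
# illustration only.

SYNONYMS = {"vehicle": "car", "automobile": "car", "client": "customer"}

def normalize(name: str) -> str:
    """Map a class name to a canonical form via the synonym table."""
    key = name.strip().lower()
    return SYNONYMS.get(key, key)

def matched_classes(solution: set, student: set) -> set:
    """Canonical names present in both models after normalization."""
    return {normalize(n) for n in solution} & {normalize(n) for n in student}

print(matched_classes({"Car", "Customer"}, {"Vehicle", "Client"}))
# → {'car', 'customer'}
```

Of course, a hand-written synonym table only covers the vocabulary the teacher anticipates; it does nothing for the second, structural problem.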
If the models were executable, one option would be to mark them the same way we mark programs (and for that, many tools do exist): feeding them test cases as input and checking that the output is the expected one. Nevertheless, this is typically not going to be the case.
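For the executable case, the grading logic itself is trivial; the hard part is executing the model. As a sketch (with the student’s model operation stood in by a plain function, which is an assumption for illustration), the grade could simply be the fraction of test cases passed:

```python
# Hypothetical sketch of test-case grading for an *executable* model:
# each test supplies an input and an expected output, and the grade is
# the fraction of tests the student's operation passes.

def grade(student_op, tests):
    passed = sum(1 for inp, expected in tests if student_op(inp) == expected)
    return passed / len(tests)

# Stand-in for an operation of the student's executable model.
def student_discount(order_total):
    return order_total * 0.9 if order_total > 100 else order_total

tests = [(50, 50), (200, 180.0), (100, 100)]
print(grade(student_discount, tests))  # → 1.0
```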
Another, more feasible option would be to build a tool that, instead of trying to find a perfect match between the models, just checks (structurally) whether the student’s model includes the elements the teacher considers essential for a correct solution. We would still face the same problems mentioned before, but in a simplified form.
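A sketch of what such a key-element check could look like, assuming (purely for illustration) that a model is encoded as a dictionary of class names and binary associations given as name pairs:

```python
# Hypothetical sketch: instead of full model matching, check that the
# student's model contains the key elements the teacher marked as
# essential. The model encoding below is an assumption.

def check_key_elements(student_model, key_classes, key_associations):
    """Return (missing_classes, missing_associations)."""
    classes = {c.lower() for c in student_model["classes"]}
    assocs = {tuple(sorted((a.lower(), b.lower())))
              for a, b in student_model["associations"]}
    missing_c = [c for c in key_classes if c.lower() not in classes]
    missing_a = [(a, b) for a, b in key_associations
                 if tuple(sorted((a.lower(), b.lower()))) not in assocs]
    return missing_c, missing_a

student = {"classes": ["Car", "Driver"],
           "associations": [("Driver", "Car")]}
print(check_key_elements(student,
                         ["Car", "Driver", "Insurance"],
                         [("Car", "Driver"), ("Car", "Insurance")]))
# → (['Insurance'], [('Car', 'Insurance')])
```

The missing elements could then be reported to the student as feedback, or weighted to produce a partial grade.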
Anyway, as I said above, I’d be interested in knowing more about how you mark the modeling assignments you give your students (feel free to also share your approach even if you’re not using a tool to assist you).