{"id":7850,"date":"2021-09-08T13:47:59","date_gmt":"2021-09-08T13:47:59","guid":{"rendered":"https:\/\/modeling-languages.com\/?p=7850"},"modified":"2021-09-08T13:47:59","modified_gmt":"2021-09-08T13:47:59","slug":"the-impact-of-introducing-a-modeling-tool-in-a-requirement-engineering-course","status":"publish","type":"post","link":"https:\/\/modeling-languages.com\/the-impact-of-introducing-a-modeling-tool-in-a-requirement-engineering-course\/","title":{"rendered":"The impact of introducing a modeling tool in a Requirement Engineering course"},"content":{"rendered":"

In numerous programming and software engineering courses, students are asked to program on paper, a practice that has both supporters and detractors. Supporters claim that programming on paper allows students to focus on functionality, avoids the distractions caused by syntax, and does not limit their thinking to a specific programming language or paradigm. Detractors counter that this method lacks the advanced capabilities provided by IDEs, such as syntax checking and auto-completion. More importantly, it offers no opportunity to execute and test the code, which prevents students from discovering bugs.

The benefits and drawbacks of programming on paper versus on a computer have been studied repeatedly for general-purpose languages such as Java and C, typically with students in introductory courses. Nevertheless, to the best of our knowledge, no study has targeted formal languages like OCL, which are taught in advanced courses.

Here, we present our experience after introducing a modeling tool for the specification of OCL constraints in a Requirements Engineering course.

Introduction

In recent years, the professors of the Requirements Engineering course of the Computer Engineering Bachelor Degree offered at the Open University of Catalonia (Universitat Oberta de Catalunya, UOC for short) have noticed that students showed some disappointment with the formative assessment. As part of this assessment, students were asked to define a series of constraints using the declarative language OCL. No digital support was recommended or provided, and students used to do this exercise on paper. The complaints of some students, together with our feeling that the lack of a software tool could be affecting the development and results of this exercise, made us consider the introduction of a modeling tool. The idea behind this change is that the tool would allow students to execute and test their OCL constraints, thereby removing a potential learning obstacle.
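To give a flavor of what students write, the following is a minimal OCL invariant. It is an illustrative sketch over a hypothetical Employee class, not an excerpt from the actual course material:

    -- Every employee must earn a positive salary and be at least 16 years old.
    context Employee
    inv validEmployee: self.salary > 0 and self.age >= 16

A tool that can evaluate such an invariant against object diagrams lets students immediately check whether the constraint accepts and rejects the instances they expect.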

There is no consensus on the use of paper in programming and software engineering courses. On the one hand, many educators endorse the benefits that programming on paper has for Computer Engineering students, or for students of any other higher education program. They claim that programming on paper allows students to focus on the logic of the program they are writing, avoids distractions caused by syntax errors, and does not limit the students' thinking to a specific programming language, platform, or paradigm. This view is also explicitly endorsed by many international companies. For example, during the hiring process, one of the several interviews that candidates must pass is the so-called whiteboard interview. During these sessions, potential employees need to solve a problem on a whiteboard using the (pseudo-)language they prefer. This way, interviewers assess the knowledge, competences, and skills of a potential hire.

At universities, pencil-and-paper programming is frequently used not only as a means of learning but also when assessing students' knowledge in both formative and summative assessments [1, 2, 3].

Despite its benefits and its adoption in specific contexts, paper programming also has its detractors and some widely recognized drawbacks. For example, students often complain that they cannot execute, test, and debug the code they are writing. This prevents them from verifying the correctness of their code and hinders the detection of errors that could be discovered easily and early simply by executing the program.

Apart from the studies that address programming using general-purpose languages, to the best of our knowledge, there is no study that focuses on the use of paper as opposed to a tool when learning formal languages and/or standard rule-definition languages such as OCL. Since modeling languages such as UML/OCL are extensively used in academia, not only in our university but in many others, we decided to document and publish our experience so that anyone in our situation can benefit from it.

Our study shows that the use of tools for learning rule-definition languages such as OCL is perceived positively by both students and faculty members. Comparing the results obtained in the fall of 2019 (when no modeling tool was provided to our students) with those of the fall of 2020 (the semester in which we introduced a modeling tool and performed this study), we observed that students' grades improved.

Context

The experience described here takes place in the Requirements Engineering course of the Computer Engineering Bachelor Degree offered at the Open University of Catalonia (UOC). Our Bachelor programmes comprise 240 ECTS credits and are planned to take four years of full-time study. Requirements Engineering is an elective course that students can take during their third or fourth academic year. The course assumes a basic knowledge of software engineering and delves into the first stage of the software development life cycle. The contents of the course are organized into five modules: (1) introduction to requirements engineering; (2) requirements elicitation; (3) requirements analysis and management; (4) requirements documentation; and (5) requirements validation and verification. The OCL language is introduced during the fourth module, as a method to formally document requirements.

The progress of our students in this course is evaluated using a continuous assessment model. Concretely, the course comprises four Continuous Assessment Tests (CATs) scheduled throughout the semester, all of them formative; the course has no summative test. For each CAT, students are provided with detailed feedback consisting of their grade accompanied by individual comments. A few days after the feedback is given, we publish the solution to the CAT.

All CAT activities are built on top of the same case study, which gives students a complete and more realistic vision of all the phases of the requirements engineering lifecycle. In particular: the first CAT focuses on requirements elicitation given the textual description of the case study; the second CAT addresses the analysis and management of requirements; in the third CAT, students document the requirements in an agile way through use cases and user stories; and, finally, in the fourth CAT, students document the requirements in a formal way (using UML/OCL) and apply validation and verification techniques. The fourth CAT is the target of our experiment.

The faculty involved in this course consists of two assistant professors and three teaching assistants. The course has around 150 students enrolled each semester, who are assigned to virtual classrooms of about 70 students each. Each classroom is run by one of the teaching assistants, who guides and accompanies the students during their learning, assesses them, and returns the corresponding feedback for each activity.

During the semester, all communication is carried out asynchronously and online, through the virtual classroom forums (where messages are public to all students in the classroom). Less often, students communicate through email to ask questions directly and privately to the teaching assistants.

Experiment description

We designed an experiment to address the following research question: What is the impact on the students' learning process of a modeling tool with support for defining and executing OCL constraints?

We propose to measure the learning process by relying on three indirect measurements, namely: the students' self-assessment, the teaching assistants' assessment, and the students' academic performance.

Our hypothesis is that using the tool will have a positive impact on the understanding and learning of the OCL language: the ability to experiment during the learning process helps students grasp the syntax and semantics of the language and detect errors (i.e., it enables trial-and-error behavior), thus promoting the definition of correct constraints.
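To make this concrete, consider a typical mistake that a tool surfaces immediately but that paper hides until grading. The classes and attributes below are hypothetical, chosen only to illustrate the trial-and-error loop:

    -- First attempt: rejected by the evaluator, because 'employees' is a
    -- collection and cannot be compared directly with an integer.
    context Department
    inv enoughStaff: self.employees > 3

    -- Corrected version after the tool's feedback: navigate the association
    -- and apply the size() collection operation.
    context Department
    inv enoughStaff: self.employees->size() > 3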

Our research context is a case of Action Research. We employed empirical research methods [4] to answer our research question. In particular, we performed a mixed-methods study combining qualitative and quantitative research approaches to address the question from different perspectives, with the goal of mitigating the weaknesses of the individual empirical methods used.

We defined a controlled experiment to study the cause-effect relationship between using the modeling tool (i.e., a binary independent variable) and its consequences. We identified the following dependent variables: the time devoted to the activity, the student's perception of the usefulness of the modeling tool, the student's perception of the tool's difficulty of use, and the student's academic performance.

The experiment was carried out during the fourth CAT of the course, which is composed of three exercises. One of the exercises assesses knowledge of OCL and accounts for 40% of the total mark of the CAT. This exercise presents a class diagram and includes four questions that ask the student to define OCL constraints. To facilitate the analysis and comparison of the results, the structure of the CAT is similar to that of CATs from previous semesters; thus, the OCL constraints to be defined in the exercise evaluate the same OCL features (variables, operators, traversals, etc.), as the sketch below illustrates.
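For reference, a constraint exercising these features might look as follows. This is an illustration over invented classes (Project, Task) and attributes, not one of the actual exam questions:

    -- Combines a variable declaration (let), comparison and logical
    -- operators, and a traversal of the association from Project to Task.
    context Project
    inv onTime: let open : Integer =
                    self.tasks->select(t | not t.done)->size()
                in open <= self.maxOpenTasks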

Before starting the CAT, students were informed about the experiment and were given the opportunity to choose whether or not to use the modeling tool to solve the CAT's exercises. The final decision to use the modeling tool was therefore up to the student. The proposed modeling tool was MagicDraw, as it is one of the most popular UML modeling tools with support for the definition and execution of OCL constraints. Besides, MagicDraw is used in the Software Engineering course, which is a prerequisite for the Requirements Engineering course.

Once the CAT started, students had four weeks to work on and deliver their solution. During this time, students could ask questions either in the virtual classroom's forum or by contacting the teaching assistants via email. After this period, we collected the experiment's data from three sources, namely: (1) the students, via an online form; (2) the teaching assistants, via online structured interviews; and (3) the messages and grades available in the university's virtual campus.

More details about the data collected from each source, as well as the detailed results of the analysis, can be found in our paper [5] (preprint available here). Next, we discuss these results.

Discussion about the results of our empirical study