The secret life of rules in Software Engineering


This was the title of an invited talk I gave at the RuleML+RR 2017 conference. The goal of the talk was to explain to that community how we understand, specify, manage and translate rules as part of our software engineering processes.

The slides of the talk are available below. But a warning first: regular readers of this blog won't find much new content there. Again, the goal was to share our views with the RuleML community.

Still, I think it is worth reading on just to make up your own mind about the three key aspects I tried to highlight in the talk:

Business rules are not first-class citizens in software engineering

We have the techniques to specify rules (e.g. with OCL), to verify and optimize them and to generate Java / SQL code from them with a few (mostly proof of concept) OCL tools.
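To make this concrete, here is a minimal sketch of what specifying a rule looks like: a business rule ("an order's total must equal the sum of its line amounts") written OCL-style, together with the kind of check a code generator could emit. The class and rule names are illustrative, not the output of any real OCL tool, and the generated code is shown in Python for brevity (real tools typically target Java or SQL).

```python
# Business rule, in OCL style (illustrative):
#   context Order inv totalConsistent:
#     self.total = self.lines->collect(amount)->sum()
#
# A hand-written equivalent of the check a generator might emit:
from dataclasses import dataclass, field
from typing import List

@dataclass
class Line:
    amount: float

@dataclass
class Order:
    lines: List[Line] = field(default_factory=list)
    total: float = 0.0

def total_consistent(order: Order) -> bool:
    """Check the 'totalConsistent' invariant on a single Order."""
    return order.total == sum(line.amount for line in order.lines)

order = Order(lines=[Line(10.0), Line(5.5)], total=15.5)
print(total_consistent(order))  # True
```

The point is not this particular check but that, once the rule is formalized, verification and code generation can be automated instead of re-implemented by hand for each system.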

Still, OCL is not really adopted in practice and OCL tools do not offer enough attractive features (e.g. in terms of good code-generation support) to encourage people to formalize their business rules.

This makes rule implementation a very ad hoc process left to individual programmers, with the attendant dangers of incomplete rule implementations, inefficient checking strategies, or even inconsistencies between the implemented rule and the intended one.

Business rules WILL be first-class citizens in software engineering

Then I argued that this situation is not sustainable, mainly because of the current data-ism we are living in. There is data everywhere, and software needs to consume data even when it did not create that data itself. Moreover, this data is typically stored in NoSQL backends, which means that software systems are now forced to manipulate data they don't know, without an explicit schema to rely on for that manipulation and without even being sure of the quality of the data.

The only way out is to put in place new mechanisms, like schema discovery processes (remember: big data is not schemaless; if anything, it's less schema), that infer explicit rule representations. These rule artefacts should be the basis for any data processing operation (via a dedicated rule engine, given the size and complexity of the data in this new scenario) and be considered at least as important as any other component of the software.
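As a toy illustration of what such a schema discovery process does, the sketch below infers, for each field in a collection of schemaless JSON-like documents, the set of observed types and whether the field is present in every document. This is purely illustrative and assumed for this post; real discovery processes also handle nesting, references, cardinalities, and much more.

```python
# Minimal schema discovery over schemaless documents: for each field,
# record the observed value types and whether it appears in every doc.
from collections import defaultdict

def discover_schema(documents):
    types = defaultdict(set)   # field name -> set of observed type names
    counts = defaultdict(int)  # field name -> number of docs containing it
    for doc in documents:
        for key, value in doc.items():
            types[key].add(type(value).__name__)
            counts[key] += 1
    return {
        key: {
            "types": sorted(types[key]),
            "required": counts[key] == len(documents),
        }
        for key in types
    }

docs = [
    {"name": "Ada", "age": 36},
    {"name": "Grace", "age": "unknown", "email": "g@example.org"},
]
schema = discover_schema(docs)
# The inferred schema exposes exactly the kind of implicit rules the post
# argues should become explicit artefacts, e.g. "age" has mixed types
# (["int", "str"]) and "email" is optional (required == False).
```

An inferred schema like this one is the raw material for the explicit rule representations argued for above: it surfaces type inconsistencies and optional fields that would otherwise stay hidden in the data.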

As an example, I talked about our Open Data for All project and mentioned the opportunities cognification can bring to the table, also for the management of rules.

We need to join forces

My last point was to beg them to join forces with us in the quest of making rules first-class elements in software engineering. I think we can learn a lot from the techniques they have developed so far. It won't be easy (we use different terminologies, different languages, and rely on different platforms…), but it doesn't make sense that there is almost zero knowledge exchange between us. So far, we've been living in parallel universes. It's time to change this!

Slides
