GDPR Impact on BPMS
This post will be looking at the impact of the GDPR regulation on BPMS and in particular Decision Management or “Business Rules” which is functionality that often comes bundled with BPMS solutions.
In general, any business which is already aligned with the earlier Data Protection Directive (1995) shouldn’t have too much to worry about. In terms of automated business processes, there shouldn’t be any major change (except for automated decision-making, discussed later). The key to thinking about GDPR is the D in GDPR: it’s about Data.
The area where businesses do have to review their systems is how they obtain personal data and whether consent has been obtained correctly. This means reviewing privacy policies and consent management so that people know what their data is going to be used for and can give the organisation their consent to process the data for those purposes.
Aside from working with legal teams on policy and consent wording at the UX layer (web form, call-centre script etc.), there are changes to the data model to manage consent for each individual, and again to the UX to capture it. So as long as the organisation has obtained personal data correctly, the processes themselves don’t need to change.
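As a minimal sketch of the kind of data-model change involved, a per-individual consent record might look like the following (field names are hypothetical, for illustration only):

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ConsentRecord:
    """One consent decision by one individual, for one purpose."""
    subject_id: str
    purpose: str               # e.g. "marketing_email"
    policy_version: str        # identifies the exact wording shown at the time
    granted: bool
    recorded_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))

# Captured at the UX layer when the individual ticks (or declines) the box
consent = ConsentRecord("cust-42", "marketing_email", "v3.2", granted=True)
```

Storing the policy version alongside the decision is what lets the organisation demonstrate later exactly what the individual agreed to.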
On the plus side, due to the nature of automated business processes there can be no doubt as to what was presented to the individual when consent was gained: each interaction through a BPMS creates its own record, providing auditability out of the box.
Decision Management or Business Rules
There is one caveat to this, which is the subject of this post: the use of decision management services or business-rules tasks. I’m a big fan of Business Rules: you take the business logic out of your applications and keep it centrally in one location, where it can be accessed from throughout the business via API calls. This means that the logic applied to your customers is the same irrespective of the channel or application being used. The other big advantage is that the business logic is there to be accessed and managed by the business; it doesn’t require a software change, with the associated time and expense of implementation.
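To make the idea concrete, here is a minimal sketch of a centralised rules service (all names are hypothetical). The rules are held as data rather than hard-coded in each application, so the business can change them without a software release, and every channel gets the same answer:

```python
# Rules are data: a named rule set is a list of (condition, decision) pairs,
# evaluated first-match-wins against the input facts.
RULES = {
    "discount_eligibility": [
        (lambda facts: facts["orders_last_year"] >= 10, "gold_discount"),
        (lambda facts: facts["orders_last_year"] >= 3, "standard_discount"),
    ],
}

def decide(rule_set: str, facts: dict, default="no_discount"):
    """Evaluate the named rule set against the facts; first match wins."""
    for condition, decision in RULES[rule_set]:
        if condition(facts):
            return decision
    return default

# The same call can be made from any channel (web, call centre, batch job),
# so every application applies identical logic.
print(decide("discount_eligibility", {"orders_last_year": 5}))  # standard_discount
```

In a real deployment the `decide` call would sit behind an API, and the rule definitions would live in the decision management tool rather than in code.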
However, when talking about Decision Management, consideration is required for one of the principal rights that GDPR introduces: protection around “automated individual decision-making, including profiling” (Art. 22). This is to safeguard individuals against the risk of a potentially damaging decision being taken without human intervention, for example having a job application or a credit-card application processed automatically.
And here we meet an inherent characteristic of Decision Management Systems: they are discriminatory. Not in the charged sense of the word, as unjust or prejudicial treatment of a group of people, but in the sense that they work by taking data, applying a rule to it and arriving at a decision, so that in some cases the decision is favourable and in others it is not.
Care must therefore be taken with the rules that are implemented, to ensure that systems are not discriminatory (in the bad sense of the word) and that the logic can be explained when asked for. So whilst an insurance company obviously can’t use GDPR’s special categories of data, e.g. someone’s ethnicity, as a business rule in calculating the premium for a car, it also has to be careful that it is not using ethnicity inadvertently. A business-rule system which calculated the premium by post code could discriminate against a minority group that lives in a particular post code and would be charged higher premiums.
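The proxy problem is easy to illustrate. The rule below looks entirely neutral (hypothetical figures and names, for illustration only), yet if a protected group is concentrated in one post-code area, the post code silently stands in for the protected characteristic:

```python
# A premium rule keyed only on post-code area. Neutral on its face, but the
# loading for "CD2" may in practice act as a proxy for whoever lives there.
POSTCODE_LOADING = {
    "AB1": 1.00,   # baseline area
    "CD2": 1.45,   # higher loading: justified by theft data, or a hidden proxy?
}

def car_premium(base_premium: float, postcode_area: str) -> float:
    # Unknown areas get a default loading of 1.10
    return round(base_premium * POSTCODE_LOADING.get(postcode_area, 1.10), 2)

print(car_premium(400.0, "CD2"))  # 580.0
```

Nothing in the code mentions ethnicity, which is precisely why this kind of rule needs scrutiny: the discrimination, if any, lives in the data behind the loading table, not in the logic.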
This is a difficult subject because data isn’t ‘neutral’: it is descriptive, and consequently ripe for inference. So whilst a post code with elevated motor premiums can reveal that a particular minority group is prevalent in an area, it is also the case that the post code has hard data about vehicle theft associated with it from the National Crime Database. Furthermore, with big enough data sets, patterns can be revealed across different sources which make it impossible not to arrive at sensitive information about individuals.
Thus, you must ensure that appropriate safeguards are in place (not just for the individual but for the organisation). The safeguard must “provide meaningful information about the logic involved, as well as the significance and the envisaged consequences”. So, if you are using vehicle-theft data by post code to calculate a car insurance premium, you must be able to explain the logic behind the premium amounts and also what happens when the theft figures for a post code change (gentrification etc., which should result in a changed premium).
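One practical way to build in that safeguard is to have the decision service return its reasoning alongside the decision. The sketch below (hypothetical formula and illustrative figures, not real data) pairs the premium with the logic and the envisaged consequence of the input changing:

```python
# Illustrative theft rates per post-code area (not real data)
THEFTS_PER_1000 = {"AB1": 2.1, "CD2": 9.7}

def premium_with_explanation(base: float, postcode_area: str) -> dict:
    """Return the decision plus 'meaningful information about the logic'."""
    thefts = THEFTS_PER_1000[postcode_area]
    loading = 1.0 + thefts / 20            # assumed formula, for illustration
    return {
        "premium": round(base * loading, 2),
        "logic": (f"loading = 1 + thefts_per_1000/20 = {loading:.3f} "
                  f"(thefts_per_1000={thefts} for {postcode_area})"),
        "consequence": "premium falls automatically if recorded thefts fall",
    }

result = premium_with_explanation(400.0, "CD2")
```

Because the explanation is derived from the same inputs as the decision, it stays correct when the theft figures change, which is exactly the gentrification scenario above.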
In conclusion, as long as you can clearly show that processing of the data is fair and transparent, that the correct controls are in place and that no damaging decision is made without human intervention, there is still a place for automated decision-making. There is now more emphasis on the ethical dimension to be considered before releasing any software that automates a decision. The advantage of Decision Management Services in this respect is that decisions can be changed much more quickly than conventional software.
For further reading on profiling and the impact of predictive algorithms, see Seth Flaxman’s paper on the Cornell University Library site.