Human actions over artificial intelligence for GDPR
Although AI will be invaluable for business when it comes to the rapid reporting of data breaches, it also poses some challenges. For one, there is the issue of complying with an individual's right to explanation. The problem with machine learning models is that they are effectively a "black box" – it is hard to know in advance what answer they will produce, or the exact reasoning behind it.
Let's look at a simple example: a bank uses a machine learning system to determine whether an applicant is creditworthy enough to receive a loan. Based on data from previous borrowers, the system learns to predict new applicants' prospects. Suppose someone is declined. The reasoning behind that decision lies within a complex web of millions of data-processing steps, which are difficult to trace back to an answer as to why the application was denied. This creates a particularly perplexing situation: the customer cannot fix the problem, because he doesn't know where it lies in the first place.
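To make the "black box" concrete, here is a minimal sketch of an automated loan decision. The network, feature names, and weight values are all illustrative assumptions (not a real trained model); the point is that the yes/no answer emerges from many numeric interactions, with no single weight that "is" the reason.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Applicant features (hypothetical, scaled to 0..1):
# [income, debt ratio, years at job, late payments]
applicant = [0.4, 0.7, 0.2, 0.6]

# "Learned" parameters — illustrative values standing in for weights
# fitted on past borrowers.
hidden_weights = [
    [ 0.9, -1.2,  0.3, -0.8],
    [-0.5,  0.7, -1.1,  0.4],
    [ 0.2,  0.6,  0.8, -0.9],
]
hidden_bias = [0.1, -0.2, 0.05]
out_weights = [1.4, -0.9, 0.6]
out_bias = -0.5

# Forward pass: every hidden unit mixes every feature, then the output
# mixes every hidden unit. The decision is buried in these interactions.
hidden = [
    sigmoid(sum(w * x for w, x in zip(row, applicant)) + b)
    for row, b in zip(hidden_weights, hidden_bias)
]
score = sigmoid(sum(w * h for w, h in zip(out_weights, hidden)) + out_bias)

decision = "approved" if score >= 0.5 else "declined"
print(f"score={score:.3f} -> {decision}")
```

Even with full access to every weight above, there is no obvious sentence of the form "you were declined because X" – which is exactly the difficulty the right to explanation exposes.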
AI’s apparent unpredictability, deep-rooted in its complex mathematical foundations, causes problems when it comes to adhering to GDPR.
Unless companies processing an individual's data fully understand the reasoning behind AI decision-making, it is difficult to adhere to the "right to explanation." Failing to explain a decision not only risks non-compliance, but also frustrates customers, who are left confused by the process.
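By contrast, when the model is simple enough – say, a linear scorecard – a per-decision explanation is straightforward to produce. The feature names, weights, and threshold below are illustrative assumptions, but the pattern shows the kind of answer a right-to-explanation request needs:

```python
# Hypothetical linear scorecard: each feature's contribution to the
# final score can be reported directly to the customer.
features = {"income": 0.4, "debt_ratio": 0.7,
            "years_at_job": 0.2, "late_payments": 0.6}
weights = {"income": 2.0, "debt_ratio": -1.5,
           "years_at_job": 0.8, "late_payments": -2.5}
threshold = 0.0

# Contribution of each feature = weight * value; the score is their sum.
contributions = {name: weights[name] * value for name, value in features.items()}
score = sum(contributions.values())
decision = "approved" if score >= threshold else "declined"

print(f"decision: {decision} (score={score:.2f})")
# List the factors from most harmful to most helpful — a human-readable
# account of why the application was declined.
for name, c in sorted(contributions.items(), key=lambda kv: kv[1]):
    direction = "hurt" if c < 0 else "helped"
    print(f"  {name}: {c:+.2f} ({direction})")
```

This transparency comes at a cost: simple, explainable models are often less accurate than the complex ones discussed above, which is the trade-off regulators and data scientists are wrestling with.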
GDPR also gives citizens the right to human review when they contest an automated decision. Of course, not everyone will contest their results, but complying with this element cuts against the whole ethos of AI: if a human must always be able to step in and re-make the decision, automation loses much of its point.
When it comes to GDPR compliance, the transparency and capabilities of AI and other machine learning algorithms are a double-edged sword. On the one hand, AI provides rapid detection of data intrusions and removes human error. On the other hand, there are still issues around the right to explanation and the opacity of AI decisions.
AI processes need to become transparent for companies to become compliant under GDPR – an issue machine learning researchers are already working on, so that AI is less of a "black box". Furthermore, the more resources spent on uncloaking machine learning models, the fewer are dedicated to making those models more effective (as Juraj Jánošík notes in this article) – and the latter is far more imperative when it comes to protecting data, particularly when adversaries could be gaining an advantage by using machine learning technology they understand more thoroughly.
Although AI has the potential to be a brilliant solution for tackling data security issues, it only takes into account the data it is fed. This means machine learning will not magically comply with GDPR unless it is explicitly programmed to.
So, will AI help companies adapt to the inevitable change to data regulations?
There are certainly some advantages to using AI; however, companies should approach it with caution. Regulatory issues around data collection and use mean that companies will find themselves treading a fine line when it comes to privacy concerns, and will need to be mindful of this when developing their data collection strategies.
Companies will need to ensure that there are efficient procedures in place for machine learning to take charge of data, and handle instances of malicious intent, so that they can safely and confidently put the responsibility of data privacy in the hands of a machine.
[Article] The difference between Artificial Intelligence and Machine Learning
[Article] What is GDPR? Everything you need to know before the 2018 deadline