HOW AI CAN ENABLE ENTERPRISES TO IMPLEMENT GENERAL DATA PROTECTION REGULATION (GDPR) POLICIES
The General Data Protection Regulation (GDPR), which goes into effect May 25, 2018, requires all companies that collect data on citizens of EU countries to provide a “reasonable” level of protection for personal data. The ramifications of non-compliance are significant, with fines of up to 4% of a firm’s global revenues.
The European Union’s sweeping new data privacy law is triggering a lot of sleepless nights for CIOs grappling with how to comply effectively with the new regulations and help their organizations avoid potentially hefty penalties.
Will AI be the answer to the highly regulated GDPR era to come?
The bar for GDPR compliance is set high. The regulation broadly interprets what constitutes personal data, covering everything from basic identity information to web data such as IP addresses and cookies, along with more personal artifacts including biometric data, sexual orientation, and even political opinions. The new regulation mandates, among other things, that personal data be erased if deemed unnecessary. Maintaining compliance over such a broad data set is all the more challenging when it is distributed among on-premises data centers, cloud offerings, and business partner systems.
The complexity of the problem has made GDPR a top data protection priority. A PwC survey found that 77% of U.S. organizations plan to spend $1 million or more to meet GDPR requirements. An Ovum report found that two-thirds of U.S. companies believe they will have to modify their global business strategies to accommodate new data privacy laws, and over half are expecting to face fines for non-compliance with the pending GDPR legislation.
This raises the question: can AI help organizations meet the GDPR’s compliance deadline and avoid penalties? After all, AI is all about handling and deriving insights from vast amounts of data, and GDPR demands that organizations comb through their databases for the rafts of personal information that fall under its purview. Not only is the answer affirmative; there are already several significant instances of AI solutions for regulatory compliance and governance gaining ground.
For example, Informatica is using advances in artificial intelligence (AI) to help organizations improve visibility and control over geographically dispersed data, providing companies with a holistic, intelligent, and automated approach to governance that addresses the challenges posed by GDPR.
AI interventions in Data Regulation Compliance and Governance
Data Location Discovery and PII Management
It’s essential to learn the location of all customer data across all systems. The first action a company needs to take is a risk assessment that estimates what kinds of data are likely to be requested and how many requests might be expected. Locating all customer data and ensuring GDPR-compliant management can be a daunting task, but there are options for automating those processes.
With AI, one can quite easily recognize concepts like person names, which is important in this context. By combining analytics that count how many documents in a repository refer to persons (as opposed to companies), or how many contain social security numbers or phone numbers, an organization can establish that the odds are a repository holds a lot of personal data, which provides a way to prioritize in the context of GDPR.
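A minimal sketch of that kind of analytics, using simple regular-expression patterns (these patterns are illustrative assumptions; real products would combine them with trained named-entity-recognition models to find person names):

```python
import re

# Illustrative PII patterns only; production systems would also use
# NER models for person names and locale-specific number formats.
PII_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "phone": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def scan_repository(documents):
    """Count how many documents contain each category of PII."""
    counts = {name: 0 for name in PII_PATTERNS}
    for doc in documents:
        for name, pattern in PII_PATTERNS.items():
            if pattern.search(doc):
                counts[name] += 1
    return counts

docs = [
    "Invoice for Acme Corp, contact sales@acme.example",
    "Applicant SSN 123-45-6789, phone 555-867-5309",
]
print(scan_repository(docs))  # {'ssn': 1, 'phone': 1, 'email': 1}
```

Counts like these, aggregated per repository, are what lets a team rank which data stores to tackle first.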
For example, M-Files uses Artificial Intelligence to streamline the process of locating and managing PII (personally identifiable information), which often resides in a host of different systems, network folders and other information silos, making it even more challenging for companies to control and protect it.
AI-Based Data Cataloguing
A solution that utilizes AI-based machine learning techniques to improve tracking and cataloging data across hybrid deployments can help companies do more accurate reporting while boosting overall efforts to achieve GDPR compliance. By automating the process of discovering and properly recording all types of data and data relationships, organizations can develop a comprehensive view of compliance-related data tucked away in non-traditional sources such as email, social media, and financial transactions – a near-impossible task using traditional solutions and manual processes.
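As a rough illustration of what such a catalog might record (the schema and field names below are hypothetical, not those of any particular product):

```python
from dataclasses import dataclass, field

# Hypothetical catalog entry; fields are illustrative assumptions.
@dataclass
class CatalogEntry:
    name: str
    source: str            # e.g. "on-prem-db", "cloud-bucket", "email"
    contains_pii: bool
    related_to: list = field(default_factory=list)  # data relationships

catalog = {}

def register(entry: CatalogEntry):
    catalog[entry.name] = entry

register(CatalogEntry("crm_contacts", "on-prem-db", contains_pii=True))
register(CatalogEntry("support_emails", "email", contains_pii=True,
                      related_to=["crm_contacts"]))

# Compliance reporting: list every PII-bearing asset and its source.
pii_assets = [(e.name, e.source) for e in catalog.values() if e.contains_pii]
print(pii_assets)
```

In a real deployment the `contains_pii` flag and the relationships would be populated by the machine-learning discovery process rather than by hand.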
Contextual Engines for Dynamically Changing Data Environments
The GDPR changes how companies should look at data storage. The risk of data being compromised increases depending on how it is stored, how many different systems it is stored in, how many people are involved in the process, and how long it is kept. Now that PII on job applications is regulated under GDPR, a company may want to routinely purge that data fairly quickly to avoid the risk of a data breach or audit. These are the kinds of procedural matters that organizations will really have to think through.
There are instances where completely removing all data is impossible. Some data, such as billing records, must be retained, and there may be conflicting regulations, such as records-retention laws. If a citizen then asks for that data to be removed, it adds a great deal of complexity: understanding which data can be removed from the system and which cannot. There will be conflicts where this regulation says one thing while an accounting act or a local or state regulation says another.
This calls for contextual engines, built using AI, that are highly aware of the changing circumstances around the data and can create a plan for how each category of data should be stored, managed, and purged. Such engines can also provide accurate guidance on the levels of encryption and the data storage techniques to apply to different data, thereby conserving hardware resources and increasing protection against malicious attacks and data breaches while minimizing the risk of GDPR violations.
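A highly simplified sketch of the kind of rule such an engine must encode, where a legal retention hold overrides an erasure request, mirroring the conflict described above (the record types and retention periods are illustrative assumptions, not legal guidance):

```python
from datetime import date, timedelta

# Illustrative retention rules keyed by record type; periods are
# assumptions for the sketch, not statements of actual law.
RETENTION_RULES = {
    "billing_record": timedelta(days=365 * 7),   # e.g. accounting law
    "job_application": timedelta(days=180),
    "marketing_profile": timedelta(days=0),      # no mandatory retention
}

def erasure_decision(record_type, created, erasure_requested, today=None):
    """Decide whether a record may be purged, honoring retention holds."""
    today = today or date.today()
    hold = RETENTION_RULES.get(record_type, timedelta(0))
    if erasure_requested and today - created < hold:
        return "retain"   # legal retention period still running
    if erasure_requested or today - created >= hold:
        return "purge"    # erasure request honored, or hold expired
    return "retain"

print(erasure_decision("billing_record", date(2017, 1, 1), True,
                       today=date(2018, 5, 25)))   # retain: 7-year hold
print(erasure_decision("marketing_profile", date(2017, 1, 1), True,
                       today=date(2018, 5, 25)))   # purge
```

A real contextual engine would learn and weigh many more signals (jurisdiction, data sensitivity, processing purpose), but the shape of the decision is the same.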
Working Out the Kinks in AI-Led GDPR Compliance
GDPR aims to give EU citizens greater control over their personal data and to hold companies accountable on matters such as data use consent, data anonymization, breach notification, cross-border data transfer, and appointment of data protection officers. For example, organizations will have to honor individuals’ “right to be forgotten,” where applicable — fulfilling requests to delete information and providing proof that it was done. They must also obtain explicit, rather than implied, permission to gather data. And they are required to allow people to see their own data in a commonly readable format.
The system will undoubtedly work those issues out, but, in the meantime, companies should roll up their sleeves and take a thorough, systematic multi-step approach. The multi-step strategy should include:
Data. A comprehensive plan to document and categorize the personal data an organization has, where it came from, and who it is shared with.
Privacy notices. A review of privacy notices to align with new GDPR requirements.
Individuals’ rights. People have enhanced rights, such as the right to be forgotten, and new rights, such as data portability. This demands a check of procedures, processes, and data formats to ensure the new terms can be met.
Legal basis for processing personal data. Companies will need to document the legal basis for processing personal data, in privacy notices and other places.
Consent. Companies should review how they obtain and record consent, as they will be required to document it. Consent must be a positive indication; it cannot be inferred. An audit trail is necessary.
Children. There will be new safeguards for children’s data. Companies will need to establish systems to verify individuals’ ages and gather parental or guardian consent for data-processing activity.
Data breaches. New breach notification rules and new fines will affect many organizations, making it essential to understand how to detect, report, and investigate personal data breaches.
Privacy by design. A privacy by design and data minimization approach will become an express legal requirement. It’s important for organizations to plan how to meet the new terms.
Data protection officers. Organizations may need to designate a data protection officer and figure out who will take responsibility for compliance and how they will position the role.
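The consent item in the checklist above, for instance, implies an auditable, timestamped record of each explicit decision. A minimal sketch of such an audit trail (the schema is assumed for illustration):

```python
from datetime import datetime, timezone

# Append-only consent log; entries are never edited, only superseded.
consent_log = []

def record_consent(subject_id, purpose, granted):
    entry = {
        "subject": subject_id,
        "purpose": purpose,
        "granted": granted,   # explicit positive indication, never inferred
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }
    consent_log.append(entry)
    return entry

def has_consent(subject_id, purpose):
    """Latest explicit decision wins; no record means no consent."""
    for entry in reversed(consent_log):
        if entry["subject"] == subject_id and entry["purpose"] == purpose:
            return entry["granted"]
    return False

record_consent("user-42", "marketing", True)
record_consent("user-42", "marketing", False)   # consent withdrawn
print(has_consent("user-42", "marketing"))      # False
print(has_consent("user-42", "analytics"))      # False: never asked
```

Defaulting to "no consent" when no record exists is what makes consent a positive indication rather than an inference.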
Will GDPR Alignment Measures Necessarily Be Disruptive?
Many companies are going through significant changes as a result of the new regulations, and the efficiency and speed that AI-powered compliance platforms offer can significantly streamline the entire process for companies seeking to ensure compliance.
There are, then, plenty of challenges keeping CIOs up at night. By taking a more intelligence-driven approach to data discovery, preparation, management, and governance, the impending GDPR mandate doesn’t have to be one of them.
AIQRATIONS
Data Glut to Data Abundance; The Fight for Data Supremacy – Enter the Age of Algorithm Ascendancy
The definition of data breaches has evolved: beyond breaches occurring under malicious intent, it now also covers those that occur as a consequence of bad data policies and regulatory oversights. This means that even policies that have been legally vetted might, in certain circumstances, end up opening the door to a significant breach of data, user privacy, and ultimately user trust.
For example, Facebook recently banned the data analytics company Cambridge Analytica from buying ads on its platform. The voter-profiling firm allegedly procured psychological profiles of 50 million people through research app developer Aleksandr Kogan, who broke Facebook’s data policies by sharing data from his personality-prediction app, which mined information from the social network’s users.
Kogan’s app, ‘thisisyourdigitallife’, harvested data not only from the individuals participating but also from everyone on their friend lists. Because Facebook’s terms of service were not so clear back in 2014, the app allowed Kogan to share the data with third parties like Cambridge Analytica. Policy-wise, it is therefore a grey area whether the breach can be considered ‘unauthorized’, but it is clear that it happened without any express authorization from Facebook. This personal information was subsequently used to target voters and sway public opinion.
This is different from site hackings in which credit card information was stolen from major retailers: the company in question, Cambridge Analytica, actually had the right to use this data. The problem is that it used the information without permission, in a way that was overtly deceptive to both Facebook users and Facebook itself.
Fallouts of Data Breaches: Developers left to deal with Tighter Controls
Facebook will become less attractive to app developers if it tightens norms for data usage as a fallout of the prevailing controversy over alleged misuse of personal information mined from its platform, say industry members.
India has the second-largest developer base for Facebook, a community that builds apps and games on the platform and engages its users. With 241 million users, the country last July overtook the US as the largest user base for the social network.
There will be more scrutiny now. With a sign-on, for example, the basic data you can get is the user’s name and email address, and even that will undergo tremendous scrutiny before approval. That will have an impact on timelines, and the viral effect could decrease: without explicit rights from users, you can no longer reach out to their contacts. The overhead thus falls on developers because of data breaches that shouldn’t have occurred in the first place, had the policies surrounding user data been more distinct and clear.
Renewed Focus on Conflicting Data Policies and Human Factors
These passive breaches, which happen because of unclear and conflicting policies instituted by Facebook, provide a very clear example of why active breaches (involving malicious attacks) and passive breaches (involving technically authorized but legally unsavoury data sharing) need to be given equal priority, and why both should be considered pertinent focuses of data protection.
While Facebook CEO Mark Zuckerberg has vowed to make changes to prevent these types of information grabs from happening in the future, many of those tweaks will be presumably made internally. Individuals and companies still need to take their own action to ensure their information remains as protected and secure as possible.
Humans are the weakest link in data protection, and potentially even the leading cause for the majority of incidents in recent years. This debacle demonstrates that cliché to its full extent. Experts believe that any privacy policy needs to take into account all third parties who get access to the data too. While designing a privacy policy one needs to keep the entire ecosystem in mind. For instance, a telecom player or a bank while designing their privacy policy will have to take into account third parties like courier agencies, teleworking agencies, and call centers who have access to all their data and what kind of access they have.
Dealing with Privacy in Analytics: Privacy-Preserving Data Mining Algorithms
The problem of privacy-preserving data mining has become more important in recent years because of the increasing ability to store personal data about users and the increasing sophistication of data mining algorithms that leverage this information. A number of algorithmic techniques, such as randomization and k-anonymity, have been suggested in recent years for performing privacy-preserving data mining. Different communities have explored parallel lines of work with regard to privacy-preserving data mining:
Privacy-Preserving Data Publishing: These techniques tend to study different transformation methods associated with privacy. These techniques include methods such as randomization, k-anonymity, and l-diversity. Another related issue is how the perturbed data can be used in conjunction with classical data mining methods such as association rule mining.
Changing the results of Data Mining Applications to preserve privacy: In many cases, the results of data mining applications such as association rule or classification rule mining can compromise the privacy of the data. This has spawned a field of privacy in which the results of data mining algorithms such as association rule mining are modified in order to preserve the privacy of the data.
Query Auditing: Such methods are akin to the previous case of modifying the results of data mining algorithms. Here, we are either modifying or restricting the results of queries.
Cryptographic Methods for Distributed Privacy: In many cases, the data may be distributed across multiple sites, and the owners of the data across these different sites may wish to compute a common function. In such cases, a variety of cryptographic protocols may be used in order to communicate among the different sites, so that secure function computation is possible without revealing sensitive information.
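To make the k-anonymity technique mentioned above concrete: a dataset is k-anonymous when every combination of quasi-identifier values (the attributes that could re-identify someone) is shared by at least k records. A small sketch with made-up data:

```python
from collections import Counter

def k_anonymity(rows, quasi_identifiers):
    """Return the size of the smallest equivalence class: the dataset
    is k-anonymous for any k up to this value."""
    classes = Counter(
        tuple(row[q] for q in quasi_identifiers) for row in rows
    )
    return min(classes.values())

# Illustrative records with generalized (suppressed) quasi-identifiers.
rows = [
    {"zip": "100**", "age": "30-39", "diagnosis": "flu"},
    {"zip": "100**", "age": "30-39", "diagnosis": "cold"},
    {"zip": "200**", "age": "40-49", "diagnosis": "flu"},
    {"zip": "200**", "age": "40-49", "diagnosis": "asthma"},
]
print(k_anonymity(rows, ["zip", "age"]))   # 2: each class has 2 rows
```

In practice, publishers generalize or suppress quasi-identifier values (as in the `100**` zip codes above) until the desired k is reached, trading data utility for privacy.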
Privacy Engineering with AI
Privacy by Design is a policy concept that was introduced at the Data Commissioners’ Conference in Jerusalem, where representatives of over 120 countries agreed they should contemplate privacy in the build and in the design. That means not just the technical tools you buy and consume, but how you operationalize, how you run your business, and how you organize around your business and data.
Privacy engineering is using the techniques of the technical, the social, the procedural, the training tools that we have available, and in the most basic sense of engineering to say, “What are the routinized systems? What are the frameworks? What are the techniques that we use to mobilize privacy-enhancing technologies that exist today, and look across the processing lifecycle to build in and solve for privacy challenges?”
It’s not just about individual machines making correlations; it’s about different data feeds streaming in from different networks, where you might make a correlation involving personally identifiable information that the individual has not consented to. AI is simply the next layer of that: we have gone from individual machines to networks, and now to systems that look for patterns with unprecedented capability. At the end of the day, it still comes back to what the individual has consented to, what is being handed off by those machines, and what those data streams contain.
There is also the question of ‘context’. The simplistic policy of asking users whether an application can access different venues of their data is very reductive. It does not, in any measure, convey how that data is going to be leveraged, or what other information about users the application could deduce and mine from it. Privacy is extremely sensitive and depends not only on what data is collected but also on the purpose: have you given consent to having it used for that particular purpose? AI could play a role in making sense of whether data is processed securely.
The Final Word: Breach of Privacy as Crucial as Breach of Data
We are slowly giving equal, if not greater, importance to breaches of privacy as compared to breaches of data. This will eventually bring under scrutiny even policies that, though legally acceptable or passively mandated, result in compromised privacy and lost trust, because there is no point in claiming a policy is legally safe if the end result leaves users on the receiving end.
This would require a comprehensive analysis of data streams, not only internal to an application ecosystem such as Facebook but also across the extended ecosystem of all the players to whom it channels data sharing, albeit in a policy-protected manner. It will require AI-enabled contextual decision making to determine which policies could be considered, in certain circumstances, to eventually breach privacy.
Longer term, though, we need that ombudsman: we need to be able to engineer an AI to serve as an ombudsman for the AI itself.
Design Thinking | Behavioural Sciences: Strategic Elements to Building a Successful AI Enterprise
Today’s artificial intelligence (AI) revolution has been made possible by the algorithm revolution. The machine learning algorithms researchers have been developing for decades, when cleverly applied to today’s web-scale data sets, can yield surprisingly good forms of intelligence. For instance, the United States Postal Service has long used neural network models to automatically read handwritten zip code digits. Today’s deep learning neural networks can be trained on millions of electronic photographs to identify faces, and similar algorithms may increasingly be used to navigate automobiles and identify tumors in X-rays. The IBM Watson information retrieval system could triumph on the game show “Jeopardy!” partly because most human knowledge is now stored electronically.
But current AI technologies are a collection of big data-driven point solutions, and algorithms are reliable only to the extent that the data used to train them is complete and appropriate. One-off or unforeseen events that humans can navigate using common sense can lead algorithms to yield nonsensical outputs.
Design thinking is defined as human-centric design that builds on a deep understanding of users (e.g., their tendencies, propensities, inclinations, and behaviours) to generate ideas, build prototypes, share what you’ve made, embrace the art of failure (i.e., fail fast but learn faster), and eventually put your innovative solution out into the world. And fortunately for us humans (who really excel at human-centric things), there is a tight correlation between design thinking and artificial intelligence.
Artificial intelligence technologies could reshape economies and societies, but more powerful algorithms do not automatically yield improved business or societal outcomes. Human-centered design thinking can help organizations get the most out of cognitive technologies.
Divergence from More Powerful Intelligence to More Creative Intelligence
While algorithms can automate many routine tasks, the narrow nature of data-driven AI implies that many other tasks will require human involvement. In such cases, algorithms should be viewed as cognitive tools capable of augmenting human capabilities and integrated into systems designed to go with the grain of human—and organizational—psychology. We don’t want to ascribe to AI algorithms more intelligence than is really there. They may be smarter than humans at certain tasks, but more generally we need to make sure algorithms are designed to help us, not do an end run around our common sense.
Design Thinking at Enterprise Premise
Although cognitive design thinking is in its early stages in many enterprises, the implications are evident. Eschewing versus embracing design thinking can mean the difference between failure and success. For example, a legacy company that believes photography hinges on printing photographs could falter compared to an internet startup that realizes many customers would prefer to share images online without making prints, and embraces technology that learns faces and automatically generates albums to enhance their experience.
To make design thinking meaningful for consumers, companies can benefit from carefully selecting use cases and the information they feed into AI technologies. In determining which available data is likely to generate desired results, enterprises can start by focusing on their individual problems and business cases, create cognitive centres of excellence, adopt common platforms to digest and analyze data, enforce strong data governance practices, and crowdsource ideas from employees and customers alike.
In assessing what constitutes proper algorithmic design, organizations may confront ethical quandaries that expose them to potential risk. Unintended algorithmic bias can lead to exclusionary and even discriminatory practices. For example, facial recognition software trained on insufficiently diverse data sets may be largely incapable of recognizing individuals with different skin tones. This could cause problems in predictive policing, and even lead to misidentification of crime suspects. If the training data sets aren’t really that diverse, any face that deviates too much from the established norm will be harder to detect. Accordingly, across many fields, we can start thinking about how we create more inclusive code and employ inclusive coding practices.
CXO Strategy for Cognitive Design Thinking
CIOs can introduce cognitive design thinking to their organizations by first determining how it can address problems that conventional technologies alone cannot solve. The technology works with the right use cases, data, and people, but demonstrating value is not always simple. However, once CIOs have proof points that show the value of cognitive design thinking, they can scale them up over time.
CIOs benefit from working with business stakeholders to identify sources of value. It is also important to involve end users in the design and conception of algorithms used to automate or augment cognitive tasks. Make sure people understand the premise of the model so they can pragmatically balance algorithm results with other information.
Enterprise Behavioral Science – From Insights to Influencing Business Decisions
Every January, how many people do you know who resolve to save more, spend less, eat better, or exercise more? These admirable goals are often proclaimed with the best of intentions but are rarely achieved. If people were purely logical, we would all be the healthiest versions of ourselves.
However, the truth is that humans are not 100% rational; we are emotional creatures that are not always predictable. Behavioral economics evolved from this recognition of human irrationality. Behavioral economics is a method of economic analysis that applies psychological insights into human behavior to explain economic decision-making.
Decision making is one of the central activities of business – hundreds of billions of decisions are made every day. Decision making sits at the heart of innovation, growth, and profitability, and is foundational to competitiveness. Despite this importance, decision making is poorly understood and badly supported by tools. A study by Bain & Company found that decision effectiveness is 95% correlated with companies’ financial performance.
Enterprise Behavioral Science is not only about understanding potential outcomes but about changing outcomes, and more specifically, changing the way people behave. Behavioral science tells us that to make a fundamental change in behavior that will affect the long-term outcome of a process, we must insert an inflection point.
As an example, suppose you are a sales rep: two years ago your revenue was $1 million, last year it was $1.1 million, and this year you expect $1.2 million in sales. The trend is clear, and your growth has been linear and predictable. However, there is a change in company leadership, and your management increases your quota to $2 million for next year. What is going to motivate you to nearly double your revenues? The difference between expectations ($2 million) and reality ($1.2 million) is often referred to as the “behavioral gap”. When the behavioral gap is significant, an inflection point is needed to close it, and the right incentive can initiate one and influence a change in behavior. Perhaps that incentive is an added bonus, President’s Club eligibility, or a promotion.
Cognitive Design Thinking – The New Indispensable Reskilling Avenue
Artificial intelligence, machine learning, big data analytics, and mobile and software development will be the top technology areas where the need for re-skilling is highest. India will need a skilled workforce of 700 million by 2022 to meet the demands of a growing economy. While there is a high probability that machine learning and artificial intelligence will play an important role in whatever job you hold in the future, there is one way to “future-proof” your career: embrace the power of design thinking.
In fact, integrating design thinking and artificial intelligence can give you “super powers” that future-proof whatever career you decide to pursue. To meld these two disciplines together, one must:
- Understand where and how artificial intelligence and behavioural science can impact your business initiatives. While you won’t need to write machine learning algorithms, business leaders do need to learn how to “think like a data scientist” in order to understand how AI can optimize key operational processes, reduce security and regulatory risks, and uncover new monetization opportunities.
- Understand how design thinking techniques, concepts, and tools can create a more compelling and empathetic user experience with “delightful” user engagement, through superior insight into your customers’ usage objectives, operating environment, and impediments to success.
Design thinking is a mindset, and IT firms are trying to move up the curve. To charge more for higher-end services, companies must provide value, and for that you need to know the end customer’s needs. For example, to provide value to a banking client, you must find out what the bank’s customers need in the country where that client is based. Latent needs emerge from a design thinking philosophy in which you observe customer data and patterns and provide a solution the customer does not yet know they need. Companies will therefore hire design thinkers, who can predict what the consumer does not know, and charge their clients accordingly. The idea in design thinking is to enable agile creation of products and solutions.
Without Design Thinking and Behavioural Science, AI Delivers Only Incremental Value
Though organizations understand the opportunity that big data presents, many struggle to find a way to unlock its value and use it in tandem with design thinking, making “big data a colossal waste of time & money.” Only by combining quantitative insights gathered using AI and machine/deep learning, qualitative research through behavioural science, and design thinking to uncover hidden patterns and understand what the customer would want, will we be able to paint a complete picture of the problem at hand and drive toward a solution that creates value for all stakeholders.